2025-08-29 14:06:02.371386 | Job console starting
2025-08-29 14:06:02.402324 | Updating git repos
2025-08-29 14:06:02.444992 | Cloning repos into workspace
2025-08-29 14:06:02.613165 | Restoring repo states
2025-08-29 14:06:02.634678 | Merging changes
2025-08-29 14:06:02.634716 | Checking out repos
2025-08-29 14:06:02.830631 | Preparing playbooks
2025-08-29 14:06:03.382382 | Running Ansible setup
2025-08-29 14:06:08.165900 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 14:06:08.928578 |
2025-08-29 14:06:08.928731 | PLAY [Base pre]
2025-08-29 14:06:08.946113 |
2025-08-29 14:06:08.946259 | TASK [Setup log path fact]
2025-08-29 14:06:08.965824 | orchestrator | ok
2025-08-29 14:06:08.984342 |
2025-08-29 14:06:08.984511 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 14:06:09.032090 | orchestrator | ok
2025-08-29 14:06:09.055176 |
2025-08-29 14:06:09.055320 | TASK [emit-job-header : Print job information]
2025-08-29 14:06:09.112488 | # Job Information
2025-08-29 14:06:09.112765 | Ansible Version: 2.16.14
2025-08-29 14:06:09.112827 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-08-29 14:06:09.112888 | Pipeline: post
2025-08-29 14:06:09.112931 | Executor: 521e9411259a
2025-08-29 14:06:09.112967 | Triggered by: https://github.com/osism/testbed/commit/cd40b8d9aeabc9c007d5e73667eb0ed02c89b73a
2025-08-29 14:06:09.113007 | Event ID: 4784273c-84e1-11f0-9d45-642685911fce
2025-08-29 14:06:09.123156 |
2025-08-29 14:06:09.123316 | LOOP [emit-job-header : Print node information]
2025-08-29 14:06:09.274523 | orchestrator | ok:
2025-08-29 14:06:09.274716 | orchestrator | # Node Information
2025-08-29 14:06:09.274750 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 14:06:09.274776 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 14:06:09.274799 | orchestrator | Username: zuul-testbed06
2025-08-29 14:06:09.274819 | orchestrator | Distro: Debian 12.11
2025-08-29 14:06:09.274863 | orchestrator | Provider: static-testbed
2025-08-29 14:06:09.274884 | orchestrator | Region:
2025-08-29 14:06:09.274905 | orchestrator | Label: testbed-orchestrator
2025-08-29 14:06:09.274925 | orchestrator | Product Name: OpenStack Nova
2025-08-29 14:06:09.274945 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 14:06:09.296167 |
2025-08-29 14:06:09.296429 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 14:06:09.948829 | orchestrator -> localhost | changed
2025-08-29 14:06:09.961457 |
2025-08-29 14:06:09.961624 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 14:06:11.182340 | orchestrator -> localhost | changed
2025-08-29 14:06:11.207745 |
2025-08-29 14:06:11.207893 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 14:06:11.498665 | orchestrator -> localhost | ok
2025-08-29 14:06:11.513580 |
2025-08-29 14:06:11.513721 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 14:06:11.543765 | orchestrator | ok
2025-08-29 14:06:11.560232 | orchestrator | included: /var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 14:06:11.568302 |
2025-08-29 14:06:11.568396 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 14:06:12.779747 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 14:06:12.779977 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/23526c7215b24797be8b5b736ada2e27_id_rsa
2025-08-29 14:06:12.780016 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/23526c7215b24797be8b5b736ada2e27_id_rsa.pub
2025-08-29 14:06:12.780042 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 14:06:12.780066 | orchestrator -> localhost | SHA256:oWll97qNMC+OZ/RoJ5ckRe3xvnslL+x/oBrqpLeTaEI zuul-build-sshkey
2025-08-29 14:06:12.780089 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 14:06:12.780123 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 14:06:12.780146 | orchestrator -> localhost | | . |
2025-08-29 14:06:12.780167 | orchestrator -> localhost | | . o |
2025-08-29 14:06:12.780187 | orchestrator -> localhost | | +... o |
2025-08-29 14:06:12.780207 | orchestrator -> localhost | | = o... . |
2025-08-29 14:06:12.780226 | orchestrator -> localhost | | + S. .. |
2025-08-29 14:06:12.780250 | orchestrator -> localhost | | E. o .. + .|
2025-08-29 14:06:12.780271 | orchestrator -> localhost | | . o+*o. o =.|
2025-08-29 14:06:12.780291 | orchestrator -> localhost | | . o+@===. + +|
2025-08-29 14:06:12.780312 | orchestrator -> localhost | | ooB=B=...o=o|
2025-08-29 14:06:12.780333 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 14:06:12.780389 | orchestrator -> localhost | ok: Runtime: 0:00:00.673616
2025-08-29 14:06:12.788646 |
2025-08-29 14:06:12.788776 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 14:06:12.826265 | orchestrator | ok
2025-08-29 14:06:12.836980 | orchestrator | included: /var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 14:06:12.846519 |
2025-08-29 14:06:12.846635 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 14:06:12.870316 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:12.879340 |
2025-08-29 14:06:12.879481 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 14:06:13.886068 | orchestrator | changed
2025-08-29 14:06:13.903202 |
2025-08-29 14:06:13.903343 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 14:06:14.181149 | orchestrator | ok
2025-08-29 14:06:14.200778 |
2025-08-29 14:06:14.200921 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 14:06:14.773360 | orchestrator | ok
2025-08-29 14:06:14.779508 |
2025-08-29 14:06:14.779621 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 14:06:15.156866 | orchestrator | ok
2025-08-29 14:06:15.169001 |
2025-08-29 14:06:15.169129 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 14:06:15.193195 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:15.203065 |
2025-08-29 14:06:15.203204 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 14:06:16.223466 | orchestrator -> localhost | changed
2025-08-29 14:06:16.246384 |
2025-08-29 14:06:16.246565 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 14:06:16.798072 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/23526c7215b24797be8b5b736ada2e27_id_rsa (zuul-build-sshkey)
2025-08-29 14:06:16.798326 | orchestrator -> localhost | ok: Runtime: 0:00:00.011211
2025-08-29 14:06:16.805849 |
2025-08-29 14:06:16.805965 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 14:06:17.345864 | orchestrator | ok
2025-08-29 14:06:17.352070 |
2025-08-29 14:06:17.352196 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 14:06:17.409684 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:17.483749 |
2025-08-29 14:06:17.483896 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 14:06:17.877606 | orchestrator | ok
2025-08-29 14:06:17.893650 |
2025-08-29 14:06:17.893795 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 14:06:17.956190 | orchestrator | ok
2025-08-29 14:06:17.969018 |
2025-08-29 14:06:17.969154 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 14:06:18.468192 | orchestrator -> localhost | ok
2025-08-29 14:06:18.474086 |
2025-08-29 14:06:18.474173 | TASK [validate-host : Collect information about the host]
2025-08-29 14:06:19.643731 | orchestrator | ok
2025-08-29 14:06:19.666515 |
2025-08-29 14:06:19.666975 | TASK [validate-host : Sanitize hostname]
2025-08-29 14:06:19.735851 | orchestrator | ok
2025-08-29 14:06:19.740983 |
2025-08-29 14:06:19.741177 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 14:06:20.253826 | orchestrator -> localhost | changed
2025-08-29 14:06:20.258869 |
2025-08-29 14:06:20.258958 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 14:06:20.755702 | orchestrator | ok
2025-08-29 14:06:20.759814 |
2025-08-29 14:06:20.759888 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 14:06:21.547305 | orchestrator -> localhost | changed
2025-08-29 14:06:21.555698 |
2025-08-29 14:06:21.555779 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 14:06:21.824141 | orchestrator | ok
2025-08-29 14:06:21.830176 |
2025-08-29 14:06:21.830265 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 14:07:07.327603 | orchestrator | changed:
2025-08-29 14:07:07.327829 | orchestrator | .d..t...... src/
2025-08-29 14:07:07.327865 | orchestrator | .d..t...... src/github.com/
2025-08-29 14:07:07.327891 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 14:07:07.327913 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 14:07:07.327934 | orchestrator | RedHat.yml
2025-08-29 14:07:07.341270 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 14:07:07.341288 | orchestrator | RedHat.yml
2025-08-29 14:07:07.341342 | orchestrator | = 1.53.0"...
2025-08-29 14:07:19.608348 | orchestrator | 14:07:19.608 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-08-29 14:07:19.642319 | orchestrator | 14:07:19.642 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 14:07:19.804071 | orchestrator | 14:07:19.803 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 14:07:20.481583 | orchestrator | 14:07:20.481 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 14:07:20.553744 | orchestrator | 14:07:20.553 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 14:07:21.017647 | orchestrator | 14:07:21.017 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:07:21.091729 | orchestrator | 14:07:21.091 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 14:07:21.527328 | orchestrator | 14:07:21.527 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:07:21.527560 | orchestrator | 14:07:21.527 STDOUT terraform: Providers are signed by their developers.
2025-08-29 14:07:21.527571 | orchestrator | 14:07:21.527 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 14:07:21.527576 | orchestrator | 14:07:21.527 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 14:07:21.527826 | orchestrator | 14:07:21.527 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 14:07:21.527839 | orchestrator | 14:07:21.527 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 14:07:21.527846 | orchestrator | 14:07:21.527 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 14:07:21.527850 | orchestrator | 14:07:21.527 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 14:07:21.528324 | orchestrator | 14:07:21.528 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 14:07:21.528666 | orchestrator | 14:07:21.528 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 14:07:21.528675 | orchestrator | 14:07:21.528 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 14:07:21.528681 | orchestrator | 14:07:21.528 STDOUT terraform: should now work.
2025-08-29 14:07:21.528685 | orchestrator | 14:07:21.528 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 14:07:21.528690 | orchestrator | 14:07:21.528 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 14:07:21.528695 | orchestrator | 14:07:21.528 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 14:07:21.641062 | orchestrator | 14:07:21.640 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-08-29 14:07:21.641121 | orchestrator | 14:07:21.640 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 14:07:21.825714 | orchestrator | 14:07:21.825 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 14:07:21.825794 | orchestrator | 14:07:21.825 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 14:07:21.825922 | orchestrator | 14:07:21.825 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 14:07:21.825966 | orchestrator | 14:07:21.825 STDOUT terraform: for this configuration.
2025-08-29 14:07:22.247637 | orchestrator | 14:07:22.247 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-08-29 14:07:22.247705 | orchestrator | 14:07:22.247 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 14:07:22.380954 | orchestrator | 14:07:22.380 STDOUT terraform: ci.auto.tfvars
2025-08-29 14:07:23.256093 | orchestrator | 14:07:23.255 STDOUT terraform: default_custom.tf
2025-08-29 14:07:24.369646 | orchestrator | 14:07:24.369 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
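The provider versions resolved during `tofu init` above come from version constraints declared in the Terraform configuration. A minimal sketch of what such a `required_providers` block could look like — the provider source addresses and the `>= 2.2.0` constraint are taken from the log output, but the block itself is an illustration, not the testbed repository's actual code:

```hcl
terraform {
  required_providers {
    # resolved to v3.3.2 in the init output above
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    # constraint ">= 2.2.0" as shown in the log; resolved to v2.5.3
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # no constraint, so init finds the latest version (v3.2.4 here)
    null = {
      source = "hashicorp/null"
    }
  }
}
```

The resolved versions are then pinned in `.terraform.lock.hcl`, which is why the init output recommends committing that file: later `tofu init` runs reuse the recorded selections instead of resolving the constraints again.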
2025-08-29 14:07:25.363155 | orchestrator | 14:07:25.362 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 14:07:25.956750 | orchestrator | 14:07:25.956 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 14:07:26.288365 | orchestrator | 14:07:26.288 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 14:07:26.288418 | orchestrator | 14:07:26.288 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 14:07:26.288487 | orchestrator | 14:07:26.288 STDOUT terraform: + create
2025-08-29 14:07:26.288602 | orchestrator | 14:07:26.288 STDOUT terraform: <= read (data resources)
2025-08-29 14:07:26.288705 | orchestrator | 14:07:26.288 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 14:07:26.289419 | orchestrator | 14:07:26.288 STDOUT terraform: # data.openstack_images_image_v2.image will be read during apply
2025-08-29 14:07:26.289432 | orchestrator | 14:07:26.288 STDOUT terraform: # (config refers to values not yet known)
2025-08-29 14:07:26.289436 | orchestrator | 14:07:26.288 STDOUT terraform: <= data "openstack_images_image_v2" "image" {
2025-08-29 14:07:26.289441 | orchestrator | 14:07:26.288 STDOUT terraform: + checksum = (known after apply)
2025-08-29 14:07:26.289445 | orchestrator | 14:07:26.289 STDOUT terraform: + created_at = (known after apply)
2025-08-29 14:07:26.289449 | orchestrator | 14:07:26.289 STDOUT terraform: + file = (known after apply)
2025-08-29 14:07:26.289453 | orchestrator | 14:07:26.289 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.289457 | orchestrator | 14:07:26.289 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.289471 | orchestrator | 14:07:26.289 STDOUT terraform: + min_disk_gb = (known after apply)
2025-08-29 14:07:26.289475 | orchestrator | 14:07:26.289 STDOUT terraform: + min_ram_mb = (known after apply)
2025-08-29 14:07:26.289479 | orchestrator | 14:07:26.289 STDOUT terraform: + most_recent = true
2025-08-29 14:07:26.289483 | orchestrator | 14:07:26.289 STDOUT terraform: + name = (known after apply)
2025-08-29 14:07:26.289487 | orchestrator | 14:07:26.289 STDOUT terraform: + protected = (known after apply)
2025-08-29 14:07:26.289491 | orchestrator | 14:07:26.289 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.289495 | orchestrator | 14:07:26.289 STDOUT terraform: + schema = (known after apply)
2025-08-29 14:07:26.289499 | orchestrator | 14:07:26.289 STDOUT terraform: + size_bytes = (known after apply)
2025-08-29 14:07:26.289503 | orchestrator | 14:07:26.289 STDOUT terraform: + tags = (known after apply)
2025-08-29 14:07:26.289507 | orchestrator | 14:07:26.289 STDOUT terraform: + updated_at = (known after apply)
2025-08-29 14:07:26.289511 | orchestrator | 14:07:26.289 STDOUT terraform: }
2025-08-29 14:07:26.290051 | orchestrator | 14:07:26.289 STDOUT terraform: # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 14:07:26.290063 | orchestrator | 14:07:26.289 STDOUT terraform: # (config refers to values not yet known)
2025-08-29 14:07:26.290067 | orchestrator | 14:07:26.289 STDOUT terraform: <= data "openstack_images_image_v2" "image_node" {
2025-08-29 14:07:26.290071 | orchestrator | 14:07:26.289 STDOUT terraform: + checksum = (known after apply)
2025-08-29 14:07:26.290074 | orchestrator | 14:07:26.289 STDOUT terraform: + created_at = (known after apply)
2025-08-29 14:07:26.290083 | orchestrator | 14:07:26.289 STDOUT terraform: + file = (known after apply)
2025-08-29 14:07:26.290087 | orchestrator | 14:07:26.289 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.290091 | orchestrator | 14:07:26.289 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.290094 | orchestrator | 14:07:26.289 STDOUT terraform: + min_disk_gb = (known after apply)
2025-08-29 14:07:26.290098 | orchestrator | 14:07:26.289 STDOUT terraform: + min_ram_mb = (known after apply)
2025-08-29 14:07:26.290102 | orchestrator | 14:07:26.289 STDOUT terraform: + most_recent = true
2025-08-29 14:07:26.290292 | orchestrator | 14:07:26.289 STDOUT terraform: + name = (known after apply)
2025-08-29 14:07:26.290298 | orchestrator | 14:07:26.290 STDOUT terraform: + protected = (known after apply)
2025-08-29 14:07:26.290302 | orchestrator | 14:07:26.290 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.290306 | orchestrator | 14:07:26.290 STDOUT terraform: + schema = (known after apply)
2025-08-29 14:07:26.290310 | orchestrator | 14:07:26.290 STDOUT terraform: + size_bytes = (known after apply)
2025-08-29 14:07:26.290314 | orchestrator | 14:07:26.290 STDOUT terraform: + tags = (known after apply)
2025-08-29 14:07:26.290317 | orchestrator | 14:07:26.290 STDOUT terraform: + updated_at = (known after apply)
2025-08-29 14:07:26.290321 | orchestrator | 14:07:26.290 STDOUT terraform: }
2025-08-29 14:07:26.290964 | orchestrator | 14:07:26.290 STDOUT terraform: # local_file.MANAGER_ADDRESS will be created
2025-08-29 14:07:26.290991 | orchestrator | 14:07:26.290 STDOUT terraform: + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 14:07:26.290996 | orchestrator | 14:07:26.290 STDOUT terraform: + content = (known after apply)
2025-08-29 14:07:26.291000 | orchestrator | 14:07:26.290 STDOUT terraform: + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.291004 | orchestrator | 14:07:26.290 STDOUT terraform: + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.291007 | orchestrator | 14:07:26.290 STDOUT terraform: + content_md5 = (known after apply)
2025-08-29 14:07:26.291011 | orchestrator | 14:07:26.290 STDOUT terraform: + content_sha1 = (known after apply)
2025-08-29 14:07:26.291015 | orchestrator | 14:07:26.290 STDOUT terraform: + content_sha256 = (known after apply)
2025-08-29 14:07:26.291048 | orchestrator | 14:07:26.290 STDOUT terraform: + content_sha512 = (known after apply)
2025-08-29 14:07:26.291053 | orchestrator | 14:07:26.290 STDOUT terraform: + directory_permission = "0777"
2025-08-29 14:07:26.291057 | orchestrator | 14:07:26.290 STDOUT terraform: + file_permission = "0644"
2025-08-29 14:07:26.291061 | orchestrator | 14:07:26.290 STDOUT terraform: + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 14:07:26.291065 | orchestrator | 14:07:26.290 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.291069 | orchestrator | 14:07:26.290 STDOUT terraform: }
2025-08-29 14:07:26.291645 | orchestrator | 14:07:26.291 STDOUT terraform: # local_file.id_rsa_pub will be created
2025-08-29 14:07:26.291661 | orchestrator | 14:07:26.291 STDOUT terraform: + resource "local_file" "id_rsa_pub" {
2025-08-29 14:07:26.291666 | orchestrator | 14:07:26.291 STDOUT terraform: + content = (known after apply)
2025-08-29 14:07:26.291670 | orchestrator | 14:07:26.291 STDOUT terraform: + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.291674 | orchestrator | 14:07:26.291 STDOUT terraform: + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.291677 | orchestrator | 14:07:26.291 STDOUT terraform: + content_md5 = (known after apply)
2025-08-29 14:07:26.291681 | orchestrator | 14:07:26.291 STDOUT terraform: + content_sha1 = (known after apply)
2025-08-29 14:07:26.291685 | orchestrator | 14:07:26.291 STDOUT terraform: + content_sha256 = (known after apply)
2025-08-29 14:07:26.291696 | orchestrator | 14:07:26.291 STDOUT terraform: + content_sha512 = (known after apply)
2025-08-29 14:07:26.291700 | orchestrator | 14:07:26.291 STDOUT terraform: + directory_permission = "0777"
2025-08-29 14:07:26.291704 | orchestrator | 14:07:26.291 STDOUT terraform: + file_permission = "0644"
2025-08-29 14:07:26.291708 | orchestrator | 14:07:26.291 STDOUT terraform: + filename = ".id_rsa.ci.pub"
2025-08-29 14:07:26.291712 | orchestrator | 14:07:26.291 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.291716 | orchestrator | 14:07:26.291 STDOUT terraform: }
2025-08-29 14:07:26.292224 | orchestrator | 14:07:26.291 STDOUT terraform: # local_file.inventory will be created
2025-08-29 14:07:26.292232 | orchestrator | 14:07:26.291 STDOUT terraform: + resource "local_file" "inventory" {
2025-08-29 14:07:26.292236 | orchestrator | 14:07:26.291 STDOUT terraform: + content = (known after apply)
2025-08-29 14:07:26.292249 | orchestrator | 14:07:26.291 STDOUT terraform: + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.292253 | orchestrator | 14:07:26.291 STDOUT terraform: + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.292257 | orchestrator | 14:07:26.291 STDOUT terraform: + content_md5 = (known after apply)
2025-08-29 14:07:26.292261 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha1 = (known after apply)
2025-08-29 14:07:26.292264 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha256 = (known after apply)
2025-08-29 14:07:26.292268 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha512 = (known after apply)
2025-08-29 14:07:26.292272 | orchestrator | 14:07:26.292 STDOUT terraform: + directory_permission = "0777"
2025-08-29 14:07:26.292276 | orchestrator | 14:07:26.292 STDOUT terraform: + file_permission = "0644"
2025-08-29 14:07:26.292280 | orchestrator | 14:07:26.292 STDOUT terraform: + filename = "inventory.ci"
2025-08-29 14:07:26.292284 | orchestrator | 14:07:26.292 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.292288 | orchestrator | 14:07:26.292 STDOUT terraform: }
2025-08-29 14:07:26.292827 | orchestrator | 14:07:26.292 STDOUT terraform: # local_sensitive_file.id_rsa will be created
2025-08-29 14:07:26.292836 | orchestrator | 14:07:26.292 STDOUT terraform: + resource "local_sensitive_file" "id_rsa" {
2025-08-29 14:07:26.292842 | orchestrator | 14:07:26.292 STDOUT terraform: + content = (sensitive value)
2025-08-29 14:07:26.292846 | orchestrator | 14:07:26.292 STDOUT terraform: + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.292850 | orchestrator | 14:07:26.292 STDOUT terraform: + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.292854 | orchestrator | 14:07:26.292 STDOUT terraform: + content_md5 = (known after apply)
2025-08-29 14:07:26.292858 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha1 = (known after apply)
2025-08-29 14:07:26.292862 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha256 = (known after apply)
2025-08-29 14:07:26.292866 | orchestrator | 14:07:26.292 STDOUT terraform: + content_sha512 = (known after apply)
2025-08-29 14:07:26.292869 | orchestrator | 14:07:26.292 STDOUT terraform: + directory_permission = "0700"
2025-08-29 14:07:26.292873 | orchestrator | 14:07:26.292 STDOUT terraform: + file_permission = "0600"
2025-08-29 14:07:26.292877 | orchestrator | 14:07:26.292 STDOUT terraform: + filename = ".id_rsa.ci"
2025-08-29 14:07:26.292887 | orchestrator | 14:07:26.292 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.292891 | orchestrator | 14:07:26.292 STDOUT terraform: }
2025-08-29 14:07:26.293049 | orchestrator | 14:07:26.292 STDOUT terraform: # null_resource.node_semaphore will be created
2025-08-29 14:07:26.293057 | orchestrator | 14:07:26.292 STDOUT terraform: + resource "null_resource" "node_semaphore" {
2025-08-29 14:07:26.293061 | orchestrator | 14:07:26.292 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.293065 | orchestrator | 14:07:26.293 STDOUT terraform: }
2025-08-29 14:07:26.293583 | orchestrator | 14:07:26.293 STDOUT terraform: # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 14:07:26.293599 | orchestrator | 14:07:26.293 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 14:07:26.293603 | orchestrator | 14:07:26.293 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.293607 | orchestrator | 14:07:26.293 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.293612 | orchestrator | 14:07:26.293 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.293615 | orchestrator | 14:07:26.293 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.293619 | orchestrator | 14:07:26.293 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.293623 | orchestrator | 14:07:26.293 STDOUT terraform: + name = "testbed-volume-manager-base"
2025-08-29 14:07:26.293627 | orchestrator | 14:07:26.293 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.293630 | orchestrator | 14:07:26.293 STDOUT terraform: + size = 80
2025-08-29 14:07:26.293634 | orchestrator | 14:07:26.293 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.293638 | orchestrator | 14:07:26.293 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.293643 | orchestrator | 14:07:26.293 STDOUT terraform: }
2025-08-29 14:07:26.293776 | orchestrator | 14:07:26.293 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 14:07:26.293832 | orchestrator | 14:07:26.293 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.293884 | orchestrator | 14:07:26.293 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.293930 | orchestrator | 14:07:26.293 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.293977 | orchestrator | 14:07:26.293 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.294039 | orchestrator | 14:07:26.293 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.294087 | orchestrator | 14:07:26.294 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.294140 | orchestrator | 14:07:26.294 STDOUT terraform: + name = "testbed-volume-0-node-base"
2025-08-29 14:07:26.294215 | orchestrator | 14:07:26.294 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.294248 | orchestrator | 14:07:26.294 STDOUT terraform: + size = 80
2025-08-29 14:07:26.294282 | orchestrator | 14:07:26.294 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.294313 | orchestrator | 14:07:26.294 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.294333 | orchestrator | 14:07:26.294 STDOUT terraform: }
2025-08-29 14:07:26.294499 | orchestrator | 14:07:26.294 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 14:07:26.294607 | orchestrator | 14:07:26.294 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.294656 | orchestrator | 14:07:26.294 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.294727 | orchestrator | 14:07:26.294 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.294773 | orchestrator | 14:07:26.294 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.294817 | orchestrator | 14:07:26.294 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.294861 | orchestrator | 14:07:26.294 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.294922 | orchestrator | 14:07:26.294 STDOUT terraform: + name = "testbed-volume-1-node-base"
2025-08-29 14:07:26.294965 | orchestrator | 14:07:26.294 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.294994 | orchestrator | 14:07:26.294 STDOUT terraform: + size = 80
2025-08-29 14:07:26.295024 | orchestrator | 14:07:26.295 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.295055 | orchestrator | 14:07:26.295 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.295078 | orchestrator | 14:07:26.295 STDOUT terraform: }
2025-08-29 14:07:26.295276 | orchestrator | 14:07:26.295 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 14:07:26.295372 | orchestrator | 14:07:26.295 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.295418 | orchestrator | 14:07:26.295 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.295453 | orchestrator | 14:07:26.295 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.295497 | orchestrator | 14:07:26.295 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.295553 | orchestrator | 14:07:26.295 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.295597 | orchestrator | 14:07:26.295 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.295649 | orchestrator | 14:07:26.295 STDOUT terraform: + name = "testbed-volume-2-node-base"
2025-08-29 14:07:26.295697 | orchestrator | 14:07:26.295 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.295728 | orchestrator | 14:07:26.295 STDOUT terraform: + size = 80
2025-08-29 14:07:26.295760 | orchestrator | 14:07:26.295 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.295791 | orchestrator | 14:07:26.295 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.295832 | orchestrator | 14:07:26.295 STDOUT terraform: }
2025-08-29 14:07:26.295990 | orchestrator | 14:07:26.295 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 14:07:26.296045 | orchestrator | 14:07:26.296 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.296086 | orchestrator | 14:07:26.296 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.296122 | orchestrator | 14:07:26.296 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.296168 | orchestrator | 14:07:26.296 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.296210 | orchestrator | 14:07:26.296 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.296252 | orchestrator | 14:07:26.296 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.296311 | orchestrator | 14:07:26.296 STDOUT terraform: + name = "testbed-volume-3-node-base"
2025-08-29 14:07:26.296356 | orchestrator | 14:07:26.296 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.296385 | orchestrator | 14:07:26.296 STDOUT terraform: + size = 80
2025-08-29 14:07:26.296470 | orchestrator | 14:07:26.296 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.296505 | orchestrator | 14:07:26.296 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.296542 | orchestrator | 14:07:26.296 STDOUT terraform: }
2025-08-29 14:07:26.296698 | orchestrator | 14:07:26.296 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 14:07:26.296753 | orchestrator | 14:07:26.296 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.296795 | orchestrator | 14:07:26.296 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.296962 | orchestrator | 14:07:26.296 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.297045 | orchestrator | 14:07:26.297 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.297131 | orchestrator | 14:07:26.297 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.297211 | orchestrator | 14:07:26.297 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.302150 | orchestrator | 14:07:26.297 STDOUT terraform: + name = "testbed-volume-4-node-base"
2025-08-29 14:07:26.302302 | orchestrator | 14:07:26.302 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.302363 | orchestrator | 14:07:26.302 STDOUT terraform: + size = 80
2025-08-29 14:07:26.302401 | orchestrator | 14:07:26.302 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.302480 | orchestrator | 14:07:26.302 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.302509 | orchestrator | 14:07:26.302 STDOUT terraform: }
2025-08-29 14:07:26.303055 | orchestrator | 14:07:26.302 STDOUT terraform: # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 14:07:26.303153 | orchestrator | 14:07:26.303 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.303219 | orchestrator | 14:07:26.303 STDOUT terraform: + attachment = (known after apply)
2025-08-29 14:07:26.303271 | orchestrator | 14:07:26.303 STDOUT terraform: + availability_zone = "nova"
2025-08-29 14:07:26.303338 | orchestrator | 14:07:26.303 STDOUT terraform: + id = (known after apply)
2025-08-29 14:07:26.303391 | orchestrator | 14:07:26.303 STDOUT terraform: + image_id = (known after apply)
2025-08-29 14:07:26.303454 | orchestrator | 14:07:26.303 STDOUT terraform: + metadata = (known after apply)
2025-08-29 14:07:26.303556 | orchestrator | 14:07:26.303 STDOUT terraform: + name = "testbed-volume-5-node-base"
2025-08-29 14:07:26.303658 | orchestrator | 14:07:26.303 STDOUT terraform: + region = (known after apply)
2025-08-29 14:07:26.303698 | orchestrator | 14:07:26.303 STDOUT terraform: + size = 80
2025-08-29 14:07:26.303757 | orchestrator | 14:07:26.303 STDOUT terraform: + volume_retype_policy = "never"
2025-08-29 14:07:26.303793 | orchestrator | 14:07:26.303 STDOUT terraform: + volume_type = "ssd"
2025-08-29 14:07:26.303826 | orchestrator | 14:07:26.303 STDOUT terraform: }
2025-08-29 14:07:26.304029 | orchestrator | 14:07:26.303 STDOUT terraform: # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 14:07:26.304100 | orchestrator | 14:07:26.304 STDOUT terraform: + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 14:07:26.304193 | orchestrator | 14:07:26.304 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.304242 | orchestrator | 14:07:26.304 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.304287 | orchestrator | 14:07:26.304 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.304343 | orchestrator | 14:07:26.304 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.304405 | orchestrator | 14:07:26.304 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 14:07:26.304457 | orchestrator | 14:07:26.304 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.304495 | orchestrator | 14:07:26.304 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.304553 | orchestrator | 14:07:26.304 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.304587 | orchestrator | 14:07:26.304 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.304610 | orchestrator | 14:07:26.304 STDOUT terraform:  } 2025-08-29 14:07:26.304887 | orchestrator | 14:07:26.304 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 14:07:26.304958 | orchestrator | 14:07:26.304 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.305005 | orchestrator | 14:07:26.304 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.305052 | orchestrator | 14:07:26.305 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.305109 | orchestrator | 14:07:26.305 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.305151 | orchestrator | 14:07:26.305 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.305210 | orchestrator | 14:07:26.305 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 14:07:26.305269 | orchestrator | 14:07:26.305 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.305301 | orchestrator | 14:07:26.305 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.305380 | 
orchestrator | 14:07:26.305 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.305430 | orchestrator | 14:07:26.305 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.305453 | orchestrator | 14:07:26.305 STDOUT terraform:  } 2025-08-29 14:07:26.305533 | orchestrator | 14:07:26.305 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 14:07:26.305600 | orchestrator | 14:07:26.305 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.305660 | orchestrator | 14:07:26.305 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.305702 | orchestrator | 14:07:26.305 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.305763 | orchestrator | 14:07:26.305 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.305823 | orchestrator | 14:07:26.305 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.305869 | orchestrator | 14:07:26.305 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 14:07:26.305929 | orchestrator | 14:07:26.305 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.305989 | orchestrator | 14:07:26.305 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.306046 | orchestrator | 14:07:26.305 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.306079 | orchestrator | 14:07:26.306 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.306119 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306374 | orchestrator | 14:07:26.306 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 14:07:26.306456 | orchestrator | 14:07:26.306 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.306527 | orchestrator | 14:07:26.306 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.306603 | orchestrator | 
14:07:26.306 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.306668 | orchestrator | 14:07:26.306 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.306713 | orchestrator | 14:07:26.306 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.306775 | orchestrator | 14:07:26.306 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 14:07:26.306852 | orchestrator | 14:07:26.306 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.306899 | orchestrator | 14:07:26.306 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.306957 | orchestrator | 14:07:26.306 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.307033 | orchestrator | 14:07:26.306 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.307070 | orchestrator | 14:07:26.307 STDOUT terraform:  } 2025-08-29 14:07:26.307269 | orchestrator | 14:07:26.307 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 14:07:26.307356 | orchestrator | 14:07:26.307 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.307417 | orchestrator | 14:07:26.307 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.307473 | orchestrator | 14:07:26.307 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.307566 | orchestrator | 14:07:26.307 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.307653 | orchestrator | 14:07:26.307 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.307724 | orchestrator | 14:07:26.307 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 14:07:26.307770 | orchestrator | 14:07:26.307 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.307815 | orchestrator | 14:07:26.307 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.307850 | orchestrator | 14:07:26.307 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
14:07:26.307883 | orchestrator | 14:07:26.307 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.307913 | orchestrator | 14:07:26.307 STDOUT terraform:  } 2025-08-29 14:07:26.308376 | orchestrator | 14:07:26.308 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 14:07:26.308439 | orchestrator | 14:07:26.308 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.308486 | orchestrator | 14:07:26.308 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.308564 | orchestrator | 14:07:26.308 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.308612 | orchestrator | 14:07:26.308 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.308657 | orchestrator | 14:07:26.308 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.308706 | orchestrator | 14:07:26.308 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 14:07:26.308751 | orchestrator | 14:07:26.308 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.308779 | orchestrator | 14:07:26.308 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.308813 | orchestrator | 14:07:26.308 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.308846 | orchestrator | 14:07:26.308 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.308867 | orchestrator | 14:07:26.308 STDOUT terraform:  } 2025-08-29 14:07:26.308919 | orchestrator | 14:07:26.308 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 14:07:26.308968 | orchestrator | 14:07:26.308 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.309009 | orchestrator | 14:07:26.308 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.309039 | orchestrator | 14:07:26.309 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.309081 | 
orchestrator | 14:07:26.309 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.309125 | orchestrator | 14:07:26.309 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.309169 | orchestrator | 14:07:26.309 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 14:07:26.309213 | orchestrator | 14:07:26.309 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.309242 | orchestrator | 14:07:26.309 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.309296 | orchestrator | 14:07:26.309 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.309329 | orchestrator | 14:07:26.309 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.309350 | orchestrator | 14:07:26.309 STDOUT terraform:  } 2025-08-29 14:07:26.309401 | orchestrator | 14:07:26.309 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 14:07:26.309450 | orchestrator | 14:07:26.309 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.309501 | orchestrator | 14:07:26.309 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.309544 | orchestrator | 14:07:26.309 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.309590 | orchestrator | 14:07:26.309 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.309648 | orchestrator | 14:07:26.309 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.309693 | orchestrator | 14:07:26.309 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 14:07:26.309756 | orchestrator | 14:07:26.309 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.309793 | orchestrator | 14:07:26.309 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.309831 | orchestrator | 14:07:26.309 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.309864 | orchestrator | 14:07:26.309 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 14:07:26.309886 | orchestrator | 14:07:26.309 STDOUT terraform:  } 2025-08-29 14:07:26.309937 | orchestrator | 14:07:26.309 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 14:07:26.309987 | orchestrator | 14:07:26.309 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.310047 | orchestrator | 14:07:26.309 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.310081 | orchestrator | 14:07:26.310 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.310125 | orchestrator | 14:07:26.310 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.310167 | orchestrator | 14:07:26.310 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.310214 | orchestrator | 14:07:26.310 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 14:07:26.310259 | orchestrator | 14:07:26.310 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.310288 | orchestrator | 14:07:26.310 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.310341 | orchestrator | 14:07:26.310 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.310375 | orchestrator | 14:07:26.310 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.310398 | orchestrator | 14:07:26.310 STDOUT terraform:  } 2025-08-29 14:07:26.310568 | orchestrator | 14:07:26.310 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 14:07:26.310627 | orchestrator | 14:07:26.310 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 14:07:26.310671 | orchestrator | 14:07:26.310 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.310715 | orchestrator | 14:07:26.310 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.310767 | orchestrator | 14:07:26.310 STDOUT terraform:  + all_metadata = (known after apply) 
2025-08-29 14:07:26.310810 | orchestrator | 14:07:26.310 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.310841 | orchestrator | 14:07:26.310 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.310931 | orchestrator | 14:07:26.310 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.310973 | orchestrator | 14:07:26.310 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.311018 | orchestrator | 14:07:26.310 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.311082 | orchestrator | 14:07:26.311 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 14:07:26.311128 | orchestrator | 14:07:26.311 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.311184 | orchestrator | 14:07:26.311 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.311246 | orchestrator | 14:07:26.311 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.311304 | orchestrator | 14:07:26.311 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.311347 | orchestrator | 14:07:26.311 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.311380 | orchestrator | 14:07:26.311 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.311418 | orchestrator | 14:07:26.311 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 14:07:26.311449 | orchestrator | 14:07:26.311 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.311490 | orchestrator | 14:07:26.311 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.311545 | orchestrator | 14:07:26.311 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.311576 | orchestrator | 14:07:26.311 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.311619 | orchestrator | 14:07:26.311 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.311656 | orchestrator | 14:07:26.311 STDOUT terraform:  + 
user_data = (sensitive value) 2025-08-29 14:07:26.311683 | orchestrator | 14:07:26.311 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.311717 | orchestrator | 14:07:26.311 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.311752 | orchestrator | 14:07:26.311 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.311837 | orchestrator | 14:07:26.311 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.311898 | orchestrator | 14:07:26.311 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.311958 | orchestrator | 14:07:26.311 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.312019 | orchestrator | 14:07:26.311 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.312045 | orchestrator | 14:07:26.312 STDOUT terraform:  } 2025-08-29 14:07:26.312068 | orchestrator | 14:07:26.312 STDOUT terraform:  + network { 2025-08-29 14:07:26.312096 | orchestrator | 14:07:26.312 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.312134 | orchestrator | 14:07:26.312 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:07:26.312175 | orchestrator | 14:07:26.312 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.312216 | orchestrator | 14:07:26.312 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.312316 | orchestrator | 14:07:26.312 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.312355 | orchestrator | 14:07:26.312 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:07:26.312393 | orchestrator | 14:07:26.312 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.312415 | orchestrator | 14:07:26.312 STDOUT terraform:  } 2025-08-29 14:07:26.312435 | orchestrator | 14:07:26.312 STDOUT terraform:  } 2025-08-29 14:07:26.312485 | orchestrator | 14:07:26.312 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 14:07:26.312566 | orchestrator | 14:07:26.312 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.312610 | orchestrator | 14:07:26.312 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.312652 | orchestrator | 14:07:26.312 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.312696 | orchestrator | 14:07:26.312 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.312737 | orchestrator | 14:07:26.312 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.312768 | orchestrator | 14:07:26.312 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.312796 | orchestrator | 14:07:26.312 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.312840 | orchestrator | 14:07:26.312 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.312881 | orchestrator | 14:07:26.312 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.312917 | orchestrator | 14:07:26.312 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.312979 | orchestrator | 14:07:26.312 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.313021 | orchestrator | 14:07:26.312 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.313064 | orchestrator | 14:07:26.313 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.313105 | orchestrator | 14:07:26.313 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.313150 | orchestrator | 14:07:26.313 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.313185 | orchestrator | 14:07:26.313 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.313224 | orchestrator | 14:07:26.313 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 14:07:26.313255 | orchestrator | 14:07:26.313 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.313297 | orchestrator | 14:07:26.313 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 14:07:26.313338 | orchestrator | 14:07:26.313 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.313369 | orchestrator | 14:07:26.313 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.313434 | orchestrator | 14:07:26.313 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.313492 | orchestrator | 14:07:26.313 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:07:26.313532 | orchestrator | 14:07:26.313 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.313572 | orchestrator | 14:07:26.313 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.313608 | orchestrator | 14:07:26.313 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.313646 | orchestrator | 14:07:26.313 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.313680 | orchestrator | 14:07:26.313 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.313717 | orchestrator | 14:07:26.313 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.313762 | orchestrator | 14:07:26.313 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.313782 | orchestrator | 14:07:26.313 STDOUT terraform:  } 2025-08-29 14:07:26.313804 | orchestrator | 14:07:26.313 STDOUT terraform:  + network { 2025-08-29 14:07:26.313830 | orchestrator | 14:07:26.313 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.313875 | orchestrator | 14:07:26.313 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:07:26.313913 | orchestrator | 14:07:26.313 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.313950 | orchestrator | 14:07:26.313 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.313987 | orchestrator | 14:07:26.313 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.314047 | orchestrator | 14:07:26.313 STDOUT terraform:  + port = (known after apply) 2025-08-29 
14:07:26.314087 | orchestrator | 14:07:26.314 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.314111 | orchestrator | 14:07:26.314 STDOUT terraform:  } 2025-08-29 14:07:26.314156 | orchestrator | 14:07:26.314 STDOUT terraform:  } 2025-08-29 14:07:26.314209 | orchestrator | 14:07:26.314 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 14:07:26.314257 | orchestrator | 14:07:26.314 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.314299 | orchestrator | 14:07:26.314 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.314340 | orchestrator | 14:07:26.314 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.314381 | orchestrator | 14:07:26.314 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.314423 | orchestrator | 14:07:26.314 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.314456 | orchestrator | 14:07:26.314 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.314484 | orchestrator | 14:07:26.314 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.314548 | orchestrator | 14:07:26.314 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.314609 | orchestrator | 14:07:26.314 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.314646 | orchestrator | 14:07:26.314 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.314678 | orchestrator | 14:07:26.314 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.314718 | orchestrator | 14:07:26.314 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.314798 | orchestrator | 14:07:26.314 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.314842 | orchestrator | 14:07:26.314 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.314885 | orchestrator | 14:07:26.314 STDOUT 
terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.314918 | orchestrator | 14:07:26.314 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.314956 | orchestrator | 14:07:26.314 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 14:07:26.314988 | orchestrator | 14:07:26.314 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.315030 | orchestrator | 14:07:26.314 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.315070 | orchestrator | 14:07:26.315 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.315100 | orchestrator | 14:07:26.315 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.315145 | orchestrator | 14:07:26.315 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.315201 | orchestrator | 14:07:26.315 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:07:26.315226 | orchestrator | 14:07:26.315 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.315269 | orchestrator | 14:07:26.315 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.315318 | orchestrator | 14:07:26.315 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.315355 | orchestrator | 14:07:26.315 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.315388 | orchestrator | 14:07:26.315 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.315431 | orchestrator | 14:07:26.315 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.315486 | orchestrator | 14:07:26.315 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.315510 | orchestrator | 14:07:26.315 STDOUT terraform:  } 2025-08-29 14:07:26.315578 | orchestrator | 14:07:26.315 STDOUT terraform:  + network { 2025-08-29 14:07:26.315608 | orchestrator | 14:07:26.315 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.315646 | orchestrator | 14:07:26.315 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-08-29 14:07:26.315683 | orchestrator | 14:07:26.315 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.315722 | orchestrator | 14:07:26.315 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.315760 | orchestrator | 14:07:26.315 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.315797 | orchestrator | 14:07:26.315 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:07:26.315834 | orchestrator | 14:07:26.315 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.315855 | orchestrator | 14:07:26.315 STDOUT terraform:  } 2025-08-29 14:07:26.315876 | orchestrator | 14:07:26.315 STDOUT terraform:  } 2025-08-29 14:07:26.315925 | orchestrator | 14:07:26.315 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 14:07:26.316001 | orchestrator | 14:07:26.315 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.316051 | orchestrator | 14:07:26.316 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.316106 | orchestrator | 14:07:26.316 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.316176 | orchestrator | 14:07:26.316 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.316252 | orchestrator | 14:07:26.316 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.316306 | orchestrator | 14:07:26.316 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.316360 | orchestrator | 14:07:26.316 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.316438 | orchestrator | 14:07:26.316 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.316508 | orchestrator | 14:07:26.316 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.316598 | orchestrator | 14:07:26.316 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.316631 | orchestrator | 14:07:26.316 
2025-08-29 14:07:26 | orchestrator | STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:28.604712 | orchestrator | 14:07:26.330 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:28.604723 | orchestrator | 14:07:26.330 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.604734 | orchestrator | 14:07:26.330 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:28.604744 | orchestrator | 14:07:26.330 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:28.604759 | orchestrator | 14:07:26.330 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:28.604770 | orchestrator | 14:07:26.330 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:28.604780 | orchestrator | 14:07:26.330 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.604791 | orchestrator | 14:07:26.330 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:28.604801 | orchestrator | 14:07:26.330 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:28.604812 | orchestrator | 14:07:26.330 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:28.604822 | orchestrator | 14:07:26.330 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:28.604832 | orchestrator | 14:07:26.330 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.604843 | orchestrator | 14:07:26.330 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:28.604853 | orchestrator | 14:07:26.330 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.604864 | orchestrator | 14:07:26.330 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.604874 | orchestrator | 14:07:26.330 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:28.604885 | orchestrator | 14:07:26.330 STDOUT terraform:  } 2025-08-29 14:07:28.604895 | orchestrator | 14:07:26.330 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 14:07:28.604906 | orchestrator | 14:07:26.330 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:28.604916 | orchestrator | 14:07:26.330 STDOUT terraform:  } 2025-08-29 14:07:28.605076 | orchestrator | 14:07:26.330 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.605442 | orchestrator | 14:07:26.330 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:28.605484 | orchestrator | 14:07:26.330 STDOUT terraform:  } 2025-08-29 14:07:28.605496 | orchestrator | 14:07:26.330 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.605506 | orchestrator | 14:07:26.330 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:28.605737 | orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.605770 | orchestrator | 14:07:26.331 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:28.605802 | orchestrator | 14:07:26.331 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:28.605813 | orchestrator | 14:07:26.331 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 14:07:28.605824 | orchestrator | 14:07:26.331 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:28.605834 | orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.605845 | orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.605855 | orchestrator | 14:07:26.331 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 14:07:28.605867 | orchestrator | 14:07:26.331 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:28.605878 | orchestrator | 14:07:26.331 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:28.605889 | orchestrator | 14:07:26.331 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:28.605899 | orchestrator | 14:07:26.331 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 14:07:28.605910 | orchestrator | 14:07:26.331 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.605920 | orchestrator | 14:07:26.331 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:28.605931 | orchestrator | 14:07:26.331 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:28.605956 | orchestrator | 14:07:26.331 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:28.605967 | orchestrator | 14:07:26.331 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:28.606136 | orchestrator | 14:07:26.331 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.606150 | orchestrator | 14:07:26.331 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:28.606160 | orchestrator | 14:07:26.331 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:28.606171 | orchestrator | 14:07:26.331 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:28.606181 | orchestrator | 14:07:26.331 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:28.606192 | orchestrator | 14:07:26.331 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.606202 | orchestrator | 14:07:26.331 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:28.606213 | orchestrator | 14:07:26.331 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.606223 | orchestrator | 14:07:26.331 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606234 | orchestrator | 14:07:26.331 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:28.606254 | orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.606264 | orchestrator | 14:07:26.331 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606275 | orchestrator | 14:07:26.331 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:28.606285 | 
orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.606296 | orchestrator | 14:07:26.331 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606306 | orchestrator | 14:07:26.331 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:28.606317 | orchestrator | 14:07:26.331 STDOUT terraform:  } 2025-08-29 14:07:28.606327 | orchestrator | 14:07:26.331 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606337 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:28.606348 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606358 | orchestrator | 14:07:26.332 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:28.606369 | orchestrator | 14:07:26.332 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:28.606380 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 14:07:28.606401 | orchestrator | 14:07:26.332 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:28.606413 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606424 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606434 | orchestrator | 14:07:26.332 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 14:07:28.606445 | orchestrator | 14:07:26.332 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:28.606455 | orchestrator | 14:07:26.332 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:28.606466 | orchestrator | 14:07:26.332 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:28.606476 | orchestrator | 14:07:26.332 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:28.606487 | orchestrator | 14:07:26.332 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.606497 | orchestrator | 
14:07:26.332 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:28.606508 | orchestrator | 14:07:26.332 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:28.606582 | orchestrator | 14:07:26.332 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:28.606594 | orchestrator | 14:07:26.332 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:28.606605 | orchestrator | 14:07:26.332 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.606615 | orchestrator | 14:07:26.332 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:28.606626 | orchestrator | 14:07:26.332 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:28.606636 | orchestrator | 14:07:26.332 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:28.606655 | orchestrator | 14:07:26.332 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:28.606666 | orchestrator | 14:07:26.332 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.606676 | orchestrator | 14:07:26.332 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:28.606687 | orchestrator | 14:07:26.332 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.606697 | orchestrator | 14:07:26.332 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606707 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:28.606718 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606728 | orchestrator | 14:07:26.332 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606739 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:28.606749 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606760 | orchestrator | 14:07:26.332 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
14:07:28.606770 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:28.606781 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606791 | orchestrator | 14:07:26.332 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.606802 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:28.606812 | orchestrator | 14:07:26.332 STDOUT terraform:  } 2025-08-29 14:07:28.606823 | orchestrator | 14:07:26.332 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:28.606833 | orchestrator | 14:07:26.332 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:28.606844 | orchestrator | 14:07:26.332 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 14:07:28.606855 | orchestrator | 14:07:26.333 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:28.606865 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.606876 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.606895 | orchestrator | 14:07:26.333 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 14:07:28.606906 | orchestrator | 14:07:26.333 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:28.606962 | orchestrator | 14:07:26.333 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:28.606975 | orchestrator | 14:07:26.333 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:28.606986 | orchestrator | 14:07:26.333 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:28.606997 | orchestrator | 14:07:26.333 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.607007 | orchestrator | 14:07:26.333 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:28.607018 | orchestrator | 14:07:26.333 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 14:07:28.607028 | orchestrator | 14:07:26.333 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:28.607055 | orchestrator | 14:07:26.333 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:28.607066 | orchestrator | 14:07:26.333 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.607077 | orchestrator | 14:07:26.333 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:28.607088 | orchestrator | 14:07:26.333 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:28.607120 | orchestrator | 14:07:26.333 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:28.607132 | orchestrator | 14:07:26.333 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:28.607147 | orchestrator | 14:07:26.333 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.607158 | orchestrator | 14:07:26.333 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:28.607169 | orchestrator | 14:07:26.333 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.607179 | orchestrator | 14:07:26.333 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.607190 | orchestrator | 14:07:26.333 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:28.607200 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.607211 | orchestrator | 14:07:26.333 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.607222 | orchestrator | 14:07:26.333 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:28.607232 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.607242 | orchestrator | 14:07:26.333 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.607253 | orchestrator | 14:07:26.333 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:28.607263 | orchestrator | 14:07:26.333 STDOUT terraform:  } 
2025-08-29 14:07:28.607274 | orchestrator | 14:07:26.333 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:28.607284 | orchestrator | 14:07:26.333 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:28.607295 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.607306 | orchestrator | 14:07:26.333 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:28.607316 | orchestrator | 14:07:26.333 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:28.607327 | orchestrator | 14:07:26.333 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 14:07:28.607337 | orchestrator | 14:07:26.333 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:28.607348 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.607358 | orchestrator | 14:07:26.333 STDOUT terraform:  } 2025-08-29 14:07:28.607369 | orchestrator | 14:07:26.333 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 14:07:28.607380 | orchestrator | 14:07:26.334 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 14:07:28.607391 | orchestrator | 14:07:26.334 STDOUT terraform:  + force_destroy = false 2025-08-29 14:07:28.607409 | orchestrator | 14:07:26.334 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.607429 | orchestrator | 14:07:26.334 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 14:07:28.607439 | orchestrator | 14:07:26.334 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.607450 | orchestrator | 14:07:26.334 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 14:07:28.607461 | orchestrator | 14:07:26.334 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:28.607471 | orchestrator | 14:07:26.334 STDOUT terraform:  } 2025-08-29 14:07:28.607482 | orchestrator | 14:07:26.334 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-08-29 14:07:28.607493 | orchestrator | 14:07:26.334 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 14:07:28.607504 | orchestrator | 14:07:26.334 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:28.607536 | orchestrator | 14:07:26.334 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.607547 | orchestrator | 14:07:26.334 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 14:07:28.607562 | orchestrator | 14:07:26.334 STDOUT terraform:  + "nova", 2025-08-29 14:07:28.607573 | orchestrator | 14:07:26.334 STDOUT terraform:  ] 2025-08-29 14:07:28.607584 | orchestrator | 14:07:26.334 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 14:07:28.607595 | orchestrator | 14:07:26.334 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 14:07:28.607606 | orchestrator | 14:07:26.334 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 14:07:28.607621 | orchestrator | 14:07:26.334 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 14:07:28.607632 | orchestrator | 14:07:26.334 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.607642 | orchestrator | 14:07:26.334 STDOUT terraform:  + name = "testbed" 2025-08-29 14:07:28.607653 | orchestrator | 14:07:26.334 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.607664 | orchestrator | 14:07:26.334 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.607674 | orchestrator | 14:07:26.334 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 14:07:28.607685 | orchestrator | 14:07:26.334 STDOUT terraform:  } 2025-08-29 14:07:28.607696 | orchestrator | 14:07:26.334 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 14:07:28.607708 | orchestrator | 14:07:26.334 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 14:07:28.607719 | orchestrator | 14:07:26.334 STDOUT terraform:  + description = "ssh" 2025-08-29 14:07:28.607730 | orchestrator | 14:07:26.334 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.607740 | orchestrator | 14:07:26.334 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.607751 | orchestrator | 14:07:26.334 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.607761 | orchestrator | 14:07:26.334 STDOUT terraform:  + port_range_max = 22 2025-08-29 14:07:28.607772 | orchestrator | 14:07:26.334 STDOUT terraform:  + port_range_min = 22 2025-08-29 14:07:28.607790 | orchestrator | 14:07:26.334 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:07:28.607801 | orchestrator | 14:07:26.335 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.607812 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.607822 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.607833 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.607850 | orchestrator | 14:07:26.335 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.607862 | orchestrator | 14:07:26.335 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.607872 | orchestrator | 14:07:26.335 STDOUT terraform:  } 2025-08-29 14:07:28.607883 | orchestrator | 14:07:26.335 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 14:07:28.607894 | orchestrator | 14:07:26.335 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 14:07:28.607904 | orchestrator | 14:07:26.335 STDOUT terraform:  + description = "wireguard" 2025-08-29 14:07:28.607915 | orchestrator 
| 14:07:26.335 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.607925 | orchestrator | 14:07:26.335 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.607936 | orchestrator | 14:07:26.335 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.607946 | orchestrator | 14:07:26.335 STDOUT terraform:  + port_range_max = 51820 2025-08-29 14:07:28.607957 | orchestrator | 14:07:26.335 STDOUT terraform:  + port_range_min = 51820 2025-08-29 14:07:28.607967 | orchestrator | 14:07:26.335 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:07:28.607978 | orchestrator | 14:07:26.335 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.607988 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.607999 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608009 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.608025 | orchestrator | 14:07:26.335 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608036 | orchestrator | 14:07:26.335 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608047 | orchestrator | 14:07:26.335 STDOUT terraform:  } 2025-08-29 14:07:28.608057 | orchestrator | 14:07:26.335 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-08-29 14:07:28.608068 | orchestrator | 14:07:26.335 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 14:07:28.608079 | orchestrator | 14:07:26.335 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608089 | orchestrator | 14:07:26.335 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608111 | orchestrator | 14:07:26.335 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608122 | orchestrator | 
14:07:26.335 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:07:28.608132 | orchestrator | 14:07:26.335 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608143 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.608153 | orchestrator | 14:07:26.335 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608164 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 14:07:28.608174 | orchestrator | 14:07:26.336 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608185 | orchestrator | 14:07:26.336 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608195 | orchestrator | 14:07:26.336 STDOUT terraform:  } 2025-08-29 14:07:28.608206 | orchestrator | 14:07:26.336 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 14:07:28.608217 | orchestrator | 14:07:26.336 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 14:07:28.608227 | orchestrator | 14:07:26.336 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608238 | orchestrator | 14:07:26.336 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608255 | orchestrator | 14:07:26.336 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608266 | orchestrator | 14:07:26.336 STDOUT terraform:  + protocol = "udp" 2025-08-29 14:07:28.608277 | orchestrator | 14:07:26.336 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608287 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.608298 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608308 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-08-29 14:07:28.608319 | orchestrator | 14:07:26.336 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608329 | orchestrator | 14:07:26.336 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608340 | orchestrator | 14:07:26.336 STDOUT terraform:  } 2025-08-29 14:07:28.608351 | orchestrator | 14:07:26.336 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 14:07:28.608361 | orchestrator | 14:07:26.336 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 14:07:28.608372 | orchestrator | 14:07:26.336 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608382 | orchestrator | 14:07:26.336 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608434 | orchestrator | 14:07:26.336 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608446 | orchestrator | 14:07:26.336 STDOUT terraform:  + protocol = "icmp" 2025-08-29 14:07:28.608480 | orchestrator | 14:07:26.336 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608505 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.608534 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608545 | orchestrator | 14:07:26.336 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.608589 | orchestrator | 14:07:26.336 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608600 | orchestrator | 14:07:26.336 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608611 | orchestrator | 14:07:26.336 STDOUT terraform:  } 2025-08-29 14:07:28.608622 | orchestrator | 14:07:26.336 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-08-29 14:07:28.608633 | 
orchestrator | 14:07:26.336 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-08-29 14:07:28.608644 | orchestrator | 14:07:26.337 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608654 | orchestrator | 14:07:26.337 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608665 | orchestrator | 14:07:26.337 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608676 | orchestrator | 14:07:26.337 STDOUT terraform:  + protocol = "tcp" 2025-08-29 14:07:28.608687 | orchestrator | 14:07:26.337 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608697 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.608708 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608718 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.608729 | orchestrator | 14:07:26.337 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608739 | orchestrator | 14:07:26.337 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608750 | orchestrator | 14:07:26.337 STDOUT terraform:  } 2025-08-29 14:07:28.608769 | orchestrator | 14:07:26.337 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-08-29 14:07:28.608780 | orchestrator | 14:07:26.337 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-08-29 14:07:28.608791 | orchestrator | 14:07:26.337 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608801 | orchestrator | 14:07:26.337 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608812 | orchestrator | 14:07:26.337 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608822 | orchestrator | 14:07:26.337 STDOUT terraform:  + protocol = "udp" 
2025-08-29 14:07:28.608833 | orchestrator | 14:07:26.337 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608844 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.608855 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.608872 | orchestrator | 14:07:26.337 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.608883 | orchestrator | 14:07:26.337 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.608894 | orchestrator | 14:07:26.337 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.608904 | orchestrator | 14:07:26.337 STDOUT terraform:  } 2025-08-29 14:07:28.608915 | orchestrator | 14:07:26.337 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-08-29 14:07:28.608926 | orchestrator | 14:07:26.337 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-08-29 14:07:28.608937 | orchestrator | 14:07:26.337 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.608952 | orchestrator | 14:07:26.337 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.608963 | orchestrator | 14:07:26.337 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.608974 | orchestrator | 14:07:26.337 STDOUT terraform:  + protocol = "icmp" 2025-08-29 14:07:28.608984 | orchestrator | 14:07:26.337 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.608995 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.609005 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.609016 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.609027 | orchestrator | 14:07:26.338 STDOUT 
terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.609037 | orchestrator | 14:07:26.338 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.609048 | orchestrator | 14:07:26.338 STDOUT terraform:  } 2025-08-29 14:07:28.609059 | orchestrator | 14:07:26.338 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-08-29 14:07:28.609069 | orchestrator | 14:07:26.338 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-08-29 14:07:28.609080 | orchestrator | 14:07:26.338 STDOUT terraform:  + description = "vrrp" 2025-08-29 14:07:28.609091 | orchestrator | 14:07:26.338 STDOUT terraform:  + direction = "ingress" 2025-08-29 14:07:28.609102 | orchestrator | 14:07:26.338 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 14:07:28.609112 | orchestrator | 14:07:26.338 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609123 | orchestrator | 14:07:26.338 STDOUT terraform:  + protocol = "112" 2025-08-29 14:07:28.609133 | orchestrator | 14:07:26.338 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.609144 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 14:07:28.609154 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 14:07:28.609171 | orchestrator | 14:07:26.338 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 14:07:28.609189 | orchestrator | 14:07:26.338 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 14:07:28.609200 | orchestrator | 14:07:26.338 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.609210 | orchestrator | 14:07:26.338 STDOUT terraform:  } 2025-08-29 14:07:28.609221 | orchestrator | 14:07:26.338 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-08-29 14:07:28.609232 | 
orchestrator | 14:07:26.338 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-08-29 14:07:28.609242 | orchestrator | 14:07:26.338 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.609253 | orchestrator | 14:07:26.338 STDOUT terraform:  + description = "management security group" 2025-08-29 14:07:28.609264 | orchestrator | 14:07:26.338 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609274 | orchestrator | 14:07:26.338 STDOUT terraform:  + name = "testbed-management" 2025-08-29 14:07:28.609285 | orchestrator | 14:07:26.338 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.609296 | orchestrator | 14:07:26.338 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 14:07:28.609306 | orchestrator | 14:07:26.338 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.609317 | orchestrator | 14:07:26.338 STDOUT terraform:  } 2025-08-29 14:07:28.609328 | orchestrator | 14:07:26.338 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-08-29 14:07:28.609339 | orchestrator | 14:07:26.339 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-08-29 14:07:28.609354 | orchestrator | 14:07:26.339 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.609365 | orchestrator | 14:07:26.339 STDOUT terraform:  + description = "node security group" 2025-08-29 14:07:28.609375 | orchestrator | 14:07:26.339 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609386 | orchestrator | 14:07:26.339 STDOUT terraform:  + name = "testbed-node" 2025-08-29 14:07:28.609397 | orchestrator | 14:07:26.339 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:28.609407 | orchestrator | 14:07:26.339 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 14:07:28.609418 | orchestrator | 14:07:26.339 STDOUT terraform:  + tenant_id = (known 
after apply) 2025-08-29 14:07:28.609429 | orchestrator | 14:07:26.339 STDOUT terraform:  } 2025-08-29 14:07:28.609439 | orchestrator | 14:07:26.339 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-08-29 14:07:28.609450 | orchestrator | 14:07:26.339 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-08-29 14:07:28.609461 | orchestrator | 14:07:26.339 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:28.609471 | orchestrator | 14:07:26.339 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-08-29 14:07:28.609482 | orchestrator | 14:07:26.339 STDOUT terraform:  + dns_nameservers = [ 2025-08-29 14:07:28.609493 | orchestrator | 14:07:26.339 STDOUT terraform:  + "8.8.8.8", 2025-08-29 14:07:28.609503 | orchestrator | 14:07:26.339 STDOUT terraform:  + "9.9.9.9", 2025-08-29 14:07:28.609539 | orchestrator | 14:07:26.339 STDOUT terraform:  ] 2025-08-29 14:07:28.609550 | orchestrator | 14:07:26.339 STDOUT terraform:  + enable_dhcp = true 2025-08-29 14:07:28.609561 | orchestrator | 14:07:26.339 STDOUT terraform:  + gateway_ip = (known after apply) 2025-08-29 14:07:28.609571 | orchestrator | 14:07:26.339 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609582 | orchestrator | 14:07:26.339 STDOUT terraform:  + ip_version = 4 2025-08-29 14:07:28.609593 | orchestrator | 14:07:26.339 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-08-29 14:07:28.609604 | orchestrator | 14:07:26.339 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-08-29 14:07:28.609620 | orchestrator | 14:07:26.339 STDOUT terraform:  + name = "subnet-testbed-management" 2025-08-29 14:07:28.609631 | orchestrator | 14:07:26.339 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:28.609642 | orchestrator | 14:07:26.339 STDOUT terraform:  + no_gateway = false 2025-08-29 14:07:28.609652 | orchestrator | 14:07:26.339 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 14:07:28.609663 | orchestrator | 14:07:26.339 STDOUT terraform:  + service_types = (known after apply) 2025-08-29 14:07:28.609674 | orchestrator | 14:07:26.339 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:28.609684 | orchestrator | 14:07:26.339 STDOUT terraform:  + allocation_pool { 2025-08-29 14:07:28.609695 | orchestrator | 14:07:26.339 STDOUT terraform:  + end = "192.168.31.250" 2025-08-29 14:07:28.609706 | orchestrator | 14:07:26.339 STDOUT terraform:  + start = "192.168.31.200" 2025-08-29 14:07:28.609716 | orchestrator | 14:07:26.339 STDOUT terraform:  } 2025-08-29 14:07:28.609727 | orchestrator | 14:07:26.339 STDOUT terraform:  } 2025-08-29 14:07:28.609738 | orchestrator | 14:07:26.339 STDOUT terraform:  # terraform_data.image will be created 2025-08-29 14:07:28.609748 | orchestrator | 14:07:26.339 STDOUT terraform:  + resource "terraform_data" "image" { 2025-08-29 14:07:28.609759 | orchestrator | 14:07:26.339 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609769 | orchestrator | 14:07:26.339 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:07:28.609780 | orchestrator | 14:07:26.339 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:07:28.609790 | orchestrator | 14:07:26.339 STDOUT terraform:  } 2025-08-29 14:07:28.609801 | orchestrator | 14:07:26.339 STDOUT terraform:  # terraform_data.image_node will be created 2025-08-29 14:07:28.609812 | orchestrator | 14:07:26.339 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-08-29 14:07:28.609822 | orchestrator | 14:07:26.339 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:28.609833 | orchestrator | 14:07:26.339 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:07:28.609843 | orchestrator | 14:07:26.339 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:07:28.609871 | orchestrator | 14:07:26.340 STDOUT terraform:  } 2025-08-29 14:07:28.609883 | orchestrator | 14:07:26.340 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-08-29 14:07:28.609893 | orchestrator | 14:07:26.340 STDOUT terraform: Changes to Outputs: 2025-08-29 14:07:28.609912 | orchestrator | 14:07:26.340 STDOUT terraform:  + manager_address = (sensitive value) 2025-08-29 14:07:28.609923 | orchestrator | 14:07:26.340 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 14:07:29.690067 | orchestrator | 14:07:29.689 STDOUT terraform: terraform_data.image: Creating... 2025-08-29 14:07:29.690136 | orchestrator | 14:07:29.689 STDOUT terraform: terraform_data.image_node: Creating... 2025-08-29 14:07:29.690144 | orchestrator | 14:07:29.689 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=9cd41552-0b90-b39c-505f-9834130c002d] 2025-08-29 14:07:29.690153 | orchestrator | 14:07:29.689 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=01d5b6e4-e502-ea11-3b5f-c831f599d679] 2025-08-29 14:07:29.706035 | orchestrator | 14:07:29.705 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-08-29 14:07:29.706078 | orchestrator | 14:07:29.705 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-08-29 14:07:29.709413 | orchestrator | 14:07:29.709 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-08-29 14:07:29.710813 | orchestrator | 14:07:29.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-08-29 14:07:29.722122 | orchestrator | 14:07:29.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-08-29 14:07:29.722157 | orchestrator | 14:07:29.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-08-29 14:07:29.722162 | orchestrator | 14:07:29.721 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-08-29 14:07:29.722167 | orchestrator | 14:07:29.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
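The two sensitive outputs reported in the plan summary above (`manager_address`, `private_key`, both shown as "(sensitive value)") would be declared roughly as follows. The value expressions are assumptions for illustration, since the log only shows the output names:

```hcl
# Hypothetical declarations matching the "(sensitive value)" outputs above.
# Both value sources are assumed, not confirmed by this log.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```

Marking an output `sensitive = true` is what causes Terraform to redact it in the apply output, as seen later in this log where both outputs print empty.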
2025-08-29 14:07:29.730158 | orchestrator | 14:07:29.730 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-08-29 14:07:29.738237 | orchestrator | 14:07:29.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-08-29 14:07:30.167618 | orchestrator | 14:07:30.167 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 14:07:30.169345 | orchestrator | 14:07:30.169 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 14:07:30.172402 | orchestrator | 14:07:30.172 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-08-29 14:07:30.173425 | orchestrator | 14:07:30.173 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-08-29 14:07:30.218890 | orchestrator | 14:07:30.218 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-08-29 14:07:30.224288 | orchestrator | 14:07:30.224 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-08-29 14:07:30.793791 | orchestrator | 14:07:30.793 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=270f3dfe-214c-4326-925e-ce3d50f0f857] 2025-08-29 14:07:30.807779 | orchestrator | 14:07:30.807 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-08-29 14:07:33.391973 | orchestrator | 14:07:33.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b] 2025-08-29 14:07:33.400438 | orchestrator | 14:07:33.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2025-08-29 14:07:33.405302 | orchestrator | 14:07:33.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=133692c3-7f4d-47c9-95e5-0fdaff452714] 2025-08-29 14:07:33.418816 | orchestrator | 14:07:33.418 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-08-29 14:07:33.420862 | orchestrator | 14:07:33.420 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=01581e26-5f0c-4aa4-b2ea-55eb57d083c9] 2025-08-29 14:07:33.428864 | orchestrator | 14:07:33.428 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-08-29 14:07:33.445214 | orchestrator | 14:07:33.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=733ac04c-b863-4853-ba25-ee7fcff80598] 2025-08-29 14:07:33.457743 | orchestrator | 14:07:33.457 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-08-29 14:07:33.464595 | orchestrator | 14:07:33.464 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=e378fad3-fb01-4445-a487-4c35c34fc10d] 2025-08-29 14:07:33.474964 | orchestrator | 14:07:33.474 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-08-29 14:07:33.478652 | orchestrator | 14:07:33.478 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=c2014fab-2f96-4ac7-a596-9bdfe7e77c34] 2025-08-29 14:07:33.491329 | orchestrator | 14:07:33.491 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
2025-08-29 14:07:33.494286 | orchestrator | 14:07:33.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4ef0722a-e89c-418b-acd0-a0241f1ecb95] 2025-08-29 14:07:33.496784 | orchestrator | 14:07:33.496 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=88f68806685892498efbf46f19f55ffe3284486d] 2025-08-29 14:07:33.503272 | orchestrator | 14:07:33.503 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-08-29 14:07:33.512683 | orchestrator | 14:07:33.512 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-08-29 14:07:33.520765 | orchestrator | 14:07:33.520 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=5ba075f44dfa66c60dd7bf66b9b7f7b953f8f7de] 2025-08-29 14:07:33.523997 | orchestrator | 14:07:33.523 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-08-29 14:07:33.559966 | orchestrator | 14:07:33.559 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=c30c9ad8-fb52-441e-a5e8-07e208e64b3b] 2025-08-29 14:07:33.852084 | orchestrator | 14:07:33.851 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=26305d6e-8929-43c9-b467-e677b222946c] 2025-08-29 14:07:34.187741 | orchestrator | 14:07:34.187 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=1e858829-7281-4cde-a348-13ef9de44bb4] 2025-08-29 14:07:34.970456 | orchestrator | 14:07:34.970 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=59e11918-5054-4372-9695-ea11be2dcd01] 2025-08-29 14:07:34.975097 | orchestrator | 14:07:34.974 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
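The management subnet being created above was planned with every attribute visible earlier in this log (CIDR, DNS servers, allocation pool); it can be reconstructed almost verbatim. Only the `network_id` wiring is assumed:

```hcl
# Reconstructed from the plan output for subnet_management.
# network_id wiring is assumed.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note the allocation pool reserves only 192.168.31.200-250 for DHCP, leaving the rest of the /20 free for statically addressed nodes.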
2025-08-29 14:07:36.875211 | orchestrator | 14:07:36.875 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=eb5dd987-b987-4c6b-9e7a-49313a8a95d8] 2025-08-29 14:07:36.875699 | orchestrator | 14:07:36.875 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=ba98bb24-5383-4aa5-9967-a5d28a51fb78] 2025-08-29 14:07:36.927005 | orchestrator | 14:07:36.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d62da809-aa2e-4162-92b4-e8a8bc4be399] 2025-08-29 14:07:36.937210 | orchestrator | 14:07:36.936 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=7332294e-ecb4-4364-9b00-941f8f59b6c8] 2025-08-29 14:07:36.974885 | orchestrator | 14:07:36.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=d3e9de19-b9f7-492e-a1b9-2626c456e661] 2025-08-29 14:07:36.978089 | orchestrator | 14:07:36.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=39f7f322-d91e-4501-9bc1-b2112ccf4f55] 2025-08-29 14:07:37.589976 | orchestrator | 14:07:37.589 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=0d4a20cd-ad55-46aa-813c-106b6e75739d] 2025-08-29 14:07:37.593597 | orchestrator | 14:07:37.593 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-08-29 14:07:37.598576 | orchestrator | 14:07:37.598 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-08-29 14:07:37.601652 | orchestrator | 14:07:37.601 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
2025-08-29 14:07:37.776963 | orchestrator | 14:07:37.776 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=84882d0f-0606-437f-8c2b-c7724db0bb67] 2025-08-29 14:07:37.795550 | orchestrator | 14:07:37.794 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-08-29 14:07:37.795647 | orchestrator | 14:07:37.795 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-08-29 14:07:37.802090 | orchestrator | 14:07:37.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-08-29 14:07:37.802754 | orchestrator | 14:07:37.802 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-08-29 14:07:37.806439 | orchestrator | 14:07:37.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-08-29 14:07:37.807077 | orchestrator | 14:07:37.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-08-29 14:07:38.051680 | orchestrator | 14:07:38.051 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=87515ccf-9b34-45c2-a418-5468ddede0a7] 2025-08-29 14:07:38.366354 | orchestrator | 14:07:38.363 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a50b8bda-408c-4bdf-a77e-f074f4fde036] 2025-08-29 14:07:38.370873 | orchestrator | 14:07:38.370 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-08-29 14:07:38.371422 | orchestrator | 14:07:38.371 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2025-08-29 14:07:38.372707 | orchestrator | 14:07:38.372 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-08-29 14:07:38.380448 | orchestrator | 14:07:38.380 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-08-29 14:07:38.562313 | orchestrator | 14:07:38.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9790c0b4-4980-4e49-b2d2-76952aa6d755] 2025-08-29 14:07:38.569707 | orchestrator | 14:07:38.569 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-08-29 14:07:38.621656 | orchestrator | 14:07:38.621 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=8a89b5f1-e199-4e86-ac55-23b13de73363] 2025-08-29 14:07:38.629621 | orchestrator | 14:07:38.629 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-08-29 14:07:38.800799 | orchestrator | 14:07:38.800 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=5dcc6ab1-b894-4ab6-8b53-31cdd56077b7] 2025-08-29 14:07:38.811133 | orchestrator | 14:07:38.810 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-08-29 14:07:39.052156 | orchestrator | 14:07:39.051 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=9a59ded8-ca96-403e-a572-0e2ddc0a27e6] 2025-08-29 14:07:39.063546 | orchestrator | 14:07:39.063 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
2025-08-29 14:07:39.295812 | orchestrator | 14:07:39.295 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=d02e06e7-d181-4e07-a01d-9cbbc948f261] 2025-08-29 14:07:39.309402 | orchestrator | 14:07:39.309 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-08-29 14:07:39.497661 | orchestrator | 14:07:39.497 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=bb4fa0bc-84a6-477a-bb43-9b86fa50f679] 2025-08-29 14:07:39.503591 | orchestrator | 14:07:39.503 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-08-29 14:07:39.525437 | orchestrator | 14:07:39.525 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=01393b6d-8ffb-4fc5-b3c1-cfcad7fcead6] 2025-08-29 14:07:39.636396 | orchestrator | 14:07:39.636 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=fe16f0c0-472d-46ac-9237-17435d6cda8b] 2025-08-29 14:07:39.654505 | orchestrator | 14:07:39.654 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d5ec4786-2cc5-4f9f-8c7d-262430c6aba0] 2025-08-29 14:07:39.747256 | orchestrator | 14:07:39.747 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=a53ce6b3-6588-49ab-8a0d-e62eb7259c0b] 2025-08-29 14:07:39.784440 | orchestrator | 14:07:39.784 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=4d8a1a6b-a3dc-4a9c-ba66-793eb6e70a04] 2025-08-29 14:07:39.947047 | orchestrator | 14:07:39.946 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=81c185fd-d021-4c92-8196-9262fd035146] 2025-08-29 14:07:40.297465 | orchestrator | 14:07:40.297 STDOUT terraform: 
openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=4bf77d97-4586-4bba-b301-900cd43dcf5a] 2025-08-29 14:07:40.444317 | orchestrator | 14:07:40.444 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=7233d798-a3d5-4d72-a786-8fcac911df09] 2025-08-29 14:07:40.553537 | orchestrator | 14:07:40.553 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=b513403d-9e33-4876-ac31-305677bf5499] 2025-08-29 14:07:43.262360 | orchestrator | 14:07:43.261 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=9daf80b5-0a9e-45bc-8034-2f022bd92ea6] 2025-08-29 14:07:43.283253 | orchestrator | 14:07:43.283 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-08-29 14:07:43.322357 | orchestrator | 14:07:43.322 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-08-29 14:07:43.322419 | orchestrator | 14:07:43.322 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-08-29 14:07:43.322989 | orchestrator | 14:07:43.322 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-08-29 14:07:43.338679 | orchestrator | 14:07:43.338 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-08-29 14:07:43.353507 | orchestrator | 14:07:43.353 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-08-29 14:07:43.357940 | orchestrator | 14:07:43.357 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 
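The six `node_server` instances starting to create above use the image resolved via `data.openstack_images_image_v2.image_node` ("Ubuntu 24.04", per the `terraform_data` inputs earlier in the log). A minimal sketch, with name, flavor, and network wiring assumed since they are not visible in this log:

```hcl
# Sketch only: name pattern and flavor variable are assumptions.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}"
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = var.flavor_node

  network {
    # Ports were created separately above, one per node.
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```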
2025-08-29 14:07:44.814582 | orchestrator | 14:07:44.814 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=d56bfc2e-c841-4d42-a9e0-1b2dce051e8b] 2025-08-29 14:07:44.828929 | orchestrator | 14:07:44.828 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-08-29 14:07:44.831291 | orchestrator | 14:07:44.831 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-08-29 14:07:44.836339 | orchestrator | 14:07:44.836 STDOUT terraform: local_file.inventory: Creating... 2025-08-29 14:07:44.837863 | orchestrator | 14:07:44.837 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=57db01c4c43b7ac6548e11ab5dc029e491594aed] 2025-08-29 14:07:44.842484 | orchestrator | 14:07:44.842 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=cccce069542cc08c3d134ac5278e8b10c9f2b75f] 2025-08-29 14:07:45.707253 | orchestrator | 14:07:45.706 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=d56bfc2e-c841-4d42-a9e0-1b2dce051e8b] 2025-08-29 14:07:53.327439 | orchestrator | 14:07:53.327 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-08-29 14:07:53.327713 | orchestrator | 14:07:53.327 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-08-29 14:07:53.327940 | orchestrator | 14:07:53.327 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-08-29 14:07:53.348686 | orchestrator | 14:07:53.348 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-08-29 14:07:53.356713 | orchestrator | 14:07:53.356 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2025-08-29 14:07:53.361078 | orchestrator | 14:07:53.360 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-08-29 14:08:03.328451 | orchestrator | 14:08:03.328 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-08-29 14:08:03.328891 | orchestrator | 14:08:03.328 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-08-29 14:08:03.329310 | orchestrator | 14:08:03.329 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-08-29 14:08:03.349866 | orchestrator | 14:08:03.349 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-08-29 14:08:03.357849 | orchestrator | 14:08:03.357 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-08-29 14:08:03.362312 | orchestrator | 14:08:03.362 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-08-29 14:08:13.330586 | orchestrator | 14:08:13.330 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-08-29 14:08:13.330719 | orchestrator | 14:08:13.330 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-08-29 14:08:13.330797 | orchestrator | 14:08:13.330 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-08-29 14:08:13.350300 | orchestrator | 14:08:13.349 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-08-29 14:08:13.358717 | orchestrator | 14:08:13.358 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-08-29 14:08:13.362710 | orchestrator | 14:08:13.362 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-08-29 14:08:13.806197 | orchestrator | 14:08:13.805 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=eabb5242-8fe8-4439-aa8a-934c25ba56cf] 2025-08-29 14:08:13.904951 | orchestrator | 14:08:13.904 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=3df2974f-98d8-4ef0-a959-8cd0b9c50cc4] 2025-08-29 14:08:14.002162 | orchestrator | 14:08:14.001 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=d444716e-a0a7-4e11-a72b-ae80bfbab0d8] 2025-08-29 14:08:14.038014 | orchestrator | 14:08:14.037 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=aa46c523-667a-4781-adc7-d7e6e4141a42] 2025-08-29 14:08:23.359279 | orchestrator | 14:08:23.358 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-08-29 14:08:23.363663 | orchestrator | 14:08:23.363 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-08-29 14:08:24.092111 | orchestrator | 14:08:24.091 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=f062de4f-86d0-4c35-8313-b101077377cf] 2025-08-29 14:08:24.096474 | orchestrator | 14:08:24.096 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=f8043f52-7c2f-46af-8585-2d61112883a3] 2025-08-29 14:08:24.111460 | orchestrator | 14:08:24.111 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-08-29 14:08:24.122266 | orchestrator | 14:08:24.122 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
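The nine volume attachments now being created pair the data volumes with the node instances; the attachment IDs logged below take the form `<instance_id>/<volume_id>`, and three volumes land on each of three instances. A sketch of the attachment resource, with the volume-to-instance index mapping assumed for illustration:

```hcl
# Sketch: the actual instance selection logic is not visible in this log;
# the modulo mapping here is an assumption.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```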
2025-08-29 14:08:24.139720 | orchestrator | 14:08:24.139 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1346826876320321199] 2025-08-29 14:08:24.144105 | orchestrator | 14:08:24.143 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-08-29 14:08:24.144370 | orchestrator | 14:08:24.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-08-29 14:08:24.144384 | orchestrator | 14:08:24.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-08-29 14:08:24.144643 | orchestrator | 14:08:24.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-08-29 14:08:24.147548 | orchestrator | 14:08:24.147 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-08-29 14:08:24.152364 | orchestrator | 14:08:24.152 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-08-29 14:08:24.161198 | orchestrator | 14:08:24.161 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-08-29 14:08:24.170890 | orchestrator | 14:08:24.170 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-08-29 14:08:24.185834 | orchestrator | 14:08:24.185 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-08-29 14:08:27.533257 | orchestrator | 14:08:27.532 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=f8043f52-7c2f-46af-8585-2d61112883a3/733ac04c-b863-4853-ba25-ee7fcff80598] 2025-08-29 14:08:27.555664 | orchestrator | 14:08:27.555 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=d444716e-a0a7-4e11-a72b-ae80bfbab0d8/c2014fab-2f96-4ac7-a596-9bdfe7e77c34] 2025-08-29 14:08:27.567906 | orchestrator | 14:08:27.567 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=eabb5242-8fe8-4439-aa8a-934c25ba56cf/e378fad3-fb01-4445-a487-4c35c34fc10d] 2025-08-29 14:08:27.579786 | orchestrator | 14:08:27.579 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=f8043f52-7c2f-46af-8585-2d61112883a3/6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b] 2025-08-29 14:08:27.595770 | orchestrator | 14:08:27.595 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=eabb5242-8fe8-4439-aa8a-934c25ba56cf/4ef0722a-e89c-418b-acd0-a0241f1ecb95] 2025-08-29 14:08:27.607984 | orchestrator | 14:08:27.607 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=d444716e-a0a7-4e11-a72b-ae80bfbab0d8/26305d6e-8929-43c9-b467-e677b222946c] 2025-08-29 14:08:33.684995 | orchestrator | 14:08:33.684 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=f8043f52-7c2f-46af-8585-2d61112883a3/01581e26-5f0c-4aa4-b2ea-55eb57d083c9] 2025-08-29 14:08:33.710053 | orchestrator | 14:08:33.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=eabb5242-8fe8-4439-aa8a-934c25ba56cf/c30c9ad8-fb52-441e-a5e8-07e208e64b3b] 2025-08-29 14:08:33.720078 | orchestrator | 
14:08:33.719 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=d444716e-a0a7-4e11-a72b-ae80bfbab0d8/133692c3-7f4d-47c9-95e5-0fdaff452714] 2025-08-29 14:08:34.186808 | orchestrator | 14:08:34.186 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-08-29 14:08:44.186973 | orchestrator | 14:08:44.186 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-08-29 14:08:44.819387 | orchestrator | 14:08:44.819 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=db7d65a6-82fc-4cce-86b7-32ff13bcfda5] 2025-08-29 14:08:45.417253 | orchestrator | 14:08:45.417 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-08-29 14:08:45.417353 | orchestrator | 14:08:45.417 STDOUT terraform: Outputs: 2025-08-29 14:08:45.417368 | orchestrator | 14:08:45.417 STDOUT terraform: manager_address = 2025-08-29 14:08:45.417378 | orchestrator | 14:08:45.417 STDOUT terraform: private_key = 2025-08-29 14:08:45.670147 | orchestrator | ok: Runtime: 0:01:26.265434 2025-08-29 14:08:45.706874 | 2025-08-29 14:08:45.707078 | TASK [Fetch manager address] 2025-08-29 14:08:46.159923 | orchestrator | ok 2025-08-29 14:08:46.167721 | 2025-08-29 14:08:46.167850 | TASK [Set manager_host address] 2025-08-29 14:08:46.248130 | orchestrator | ok 2025-08-29 14:08:46.257375 | 2025-08-29 14:08:46.257553 | LOOP [Update ansible collections] 2025-08-29 14:08:49.780357 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:08:49.780793 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:08:49.781557 | orchestrator | Starting galaxy collection install process 2025-08-29 14:08:49.781624 | orchestrator | Process install dependency map 2025-08-29 14:08:49.781666 | orchestrator | Starting 
collection install process 2025-08-29 14:08:49.781703 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 14:08:49.781750 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-08-29 14:08:49.781793 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 14:08:49.781885 | orchestrator | ok: Item: commons Runtime: 0:00:03.187579 2025-08-29 14:08:50.667335 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:08:50.667564 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:08:50.667622 | orchestrator | Starting galaxy collection install process 2025-08-29 14:08:50.667663 | orchestrator | Process install dependency map 2025-08-29 14:08:50.667702 | orchestrator | Starting collection install process 2025-08-29 14:08:50.667737 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-08-29 14:08:50.667774 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-08-29 14:08:50.667807 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 14:08:50.667861 | orchestrator | ok: Item: services Runtime: 0:00:00.622530 2025-08-29 14:08:50.685805 | 2025-08-29 14:08:50.685978 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:09:01.231437 | orchestrator | ok 2025-08-29 14:09:01.239184 | 2025-08-29 14:09:01.239294 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:10:01.284794 | orchestrator | ok 2025-08-29 14:10:01.295878 | 2025-08-29 14:10:01.296016 | TASK [Fetch manager ssh hostkey] 2025-08-29 
14:10:02.875335 | orchestrator | Output suppressed because no_log was given 2025-08-29 14:10:02.891069 | 2025-08-29 14:10:02.891238 | TASK [Get ssh keypair from terraform environment] 2025-08-29 14:10:03.426401 | orchestrator | ok: Runtime: 0:00:00.008830 2025-08-29 14:10:03.441912 | 2025-08-29 14:10:03.442073 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:10:03.481183 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 14:10:03.491197 | 2025-08-29 14:10:03.491323 | TASK [Run manager part 0] 2025-08-29 14:10:05.694817 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:10:05.931754 | orchestrator | 2025-08-29 14:10:05.931817 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 14:10:05.931825 | orchestrator | 2025-08-29 14:10:05.931840 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 14:10:07.945856 | orchestrator | ok: [testbed-manager] 2025-08-29 14:10:07.945924 | orchestrator | 2025-08-29 14:10:07.945948 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:10:07.945958 | orchestrator | 2025-08-29 14:10:07.945966 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:10:10.480058 | orchestrator | ok: [testbed-manager] 2025-08-29 14:10:10.480209 | orchestrator | 2025-08-29 14:10:10.480225 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:10:11.188285 | orchestrator | ok: [testbed-manager] 2025-08-29 14:10:11.188342 | orchestrator | 2025-08-29 14:10:11.188351 | orchestrator | TASK [Set repo_path fact] 
****************************************************** 2025-08-29 14:10:11.258566 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.258617 | orchestrator | 2025-08-29 14:10:11.258626 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 14:10:11.308894 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.308939 | orchestrator | 2025-08-29 14:10:11.308947 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:10:11.352826 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.352876 | orchestrator | 2025-08-29 14:10:11.352882 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:10:11.392900 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.392950 | orchestrator | 2025-08-29 14:10:11.392957 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:10:11.423388 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.423439 | orchestrator | 2025-08-29 14:10:11.423446 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 14:10:11.459952 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.460008 | orchestrator | 2025-08-29 14:10:11.460015 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 14:10:11.497258 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:10:11.497315 | orchestrator | 2025-08-29 14:10:11.497322 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 14:10:12.304207 | orchestrator | changed: [testbed-manager] 2025-08-29 14:10:12.304264 | orchestrator | 2025-08-29 14:10:12.304270 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 
14:13:13.122090 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:13.122190 | orchestrator | 2025-08-29 14:13:13.122209 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 14:14:34.597628 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:34.597728 | orchestrator | 2025-08-29 14:14:34.597744 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:15:04.105734 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:04.105788 | orchestrator | 2025-08-29 14:15:04.105798 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:15:13.775282 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:13.775387 | orchestrator | 2025-08-29 14:15:13.775397 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:15:13.827048 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:13.827092 | orchestrator | 2025-08-29 14:15:13.827101 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 14:15:14.653123 | orchestrator | ok: [testbed-manager] 2025-08-29 14:15:14.653198 | orchestrator | 2025-08-29 14:15:14.653214 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 14:15:15.380302 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:15.380406 | orchestrator | 2025-08-29 14:15:15.380423 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 14:15:22.025513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:22.025607 | orchestrator | 2025-08-29 14:15:22.025664 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 14:15:28.246167 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:28.246276 | 
orchestrator | 2025-08-29 14:15:28.246296 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 14:15:31.156438 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:31.156547 | orchestrator | 2025-08-29 14:15:31.156565 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 14:15:33.083034 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:33.083152 | orchestrator | 2025-08-29 14:15:33.083170 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 14:15:34.223188 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:15:34.223279 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:15:34.223329 | orchestrator | 2025-08-29 14:15:34.223344 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 14:15:34.266886 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:15:34.267005 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:15:34.267019 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:15:34.267033 | orchestrator | deprecation_warnings=False in ansible.cfg. 
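The venv steps above ("Create venv directory", "Install netaddr in venv", "Install ansible-core in venv", "Install requests >= 2.32.2", "Install docker >= 7.1.0") boil down to a standard `python3 -m venv` bootstrap. A sketch using a scratch directory instead of the log's `/opt/venv`, with the pip step left as a comment because it needs network access:

```shell
# Scratch venv standing in for /opt/venv from the log.
venv_dir=$(mktemp -d)/venv
python3 -m venv "$venv_dir"
"$venv_dir/bin/python" --version
# On the manager the playbook then effectively runs (network required):
#   /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
```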
2025-08-29 14:15:44.700310 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:15:44.700434 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:15:44.700450 | orchestrator | 2025-08-29 14:15:44.700463 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 14:15:45.325170 | orchestrator | changed: [testbed-manager] 2025-08-29 14:15:45.325434 | orchestrator | 2025-08-29 14:15:45.325457 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 14:17:05.888908 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 14:17:05.889033 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 14:17:05.889052 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 14:17:05.889066 | orchestrator | 2025-08-29 14:17:05.889079 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 14:17:08.269846 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-08-29 14:17:08.269885 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 14:17:08.269890 | orchestrator | 2025-08-29 14:17:08.269895 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 14:17:08.269900 | orchestrator | 2025-08-29 14:17:08.269905 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:09.670251 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:09.670340 | orchestrator | 2025-08-29 14:17:09.670358 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:17:09.711332 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:09.711404 | 
orchestrator | 2025-08-29 14:17:09.711442 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:17:09.792801 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:09.792879 | orchestrator | 2025-08-29 14:17:09.792895 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:17:10.580615 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:10.580701 | orchestrator | 2025-08-29 14:17:10.580718 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:17:11.315668 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:11.315854 | orchestrator | 2025-08-29 14:17:11.315867 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:17:12.759815 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 14:17:12.759879 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 14:17:12.759894 | orchestrator | 2025-08-29 14:17:12.759922 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:17:14.176306 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:14.176378 | orchestrator | 2025-08-29 14:17:14.176393 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:17:16.061817 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:17:16.061892 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 14:17:16.061903 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:17:16.061913 | orchestrator | 2025-08-29 14:17:16.061923 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:17:16.119178 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 14:17:16.119261 | orchestrator | 2025-08-29 14:17:16.119278 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:17:16.698866 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:16.698908 | orchestrator | 2025-08-29 14:17:16.698917 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:17:16.768763 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:16.768804 | orchestrator | 2025-08-29 14:17:16.768813 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:17:17.668611 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:17:17.668676 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:17.668686 | orchestrator | 2025-08-29 14:17:17.668693 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:17:17.700401 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:17.700468 | orchestrator | 2025-08-29 14:17:17.700480 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:17:17.730812 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:17.730880 | orchestrator | 2025-08-29 14:17:17.730892 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:17:17.762101 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:17.762182 | orchestrator | 2025-08-29 14:17:17.762192 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:17:17.814972 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:17.815016 | orchestrator | 2025-08-29 14:17:17.815024 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:17:18.554054 | orchestrator 
| ok: [testbed-manager] 2025-08-29 14:17:18.554139 | orchestrator | 2025-08-29 14:17:18.554184 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:17:18.554198 | orchestrator | 2025-08-29 14:17:18.554209 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:19.996989 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:19.997076 | orchestrator | 2025-08-29 14:17:19.997092 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 14:17:20.982833 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:20.982914 | orchestrator | 2025-08-29 14:17:20.982929 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:17:20.982942 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:17:20.982954 | orchestrator | 2025-08-29 14:17:21.275949 | orchestrator | ok: Runtime: 0:07:17.284231 2025-08-29 14:17:21.292446 | 2025-08-29 14:17:21.292596 | TASK [Point out that logging in on the manager is now possible] 2025-08-29 14:17:21.331101 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-08-29 14:17:21.341185 | 2025-08-29 14:17:21.341312 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:17:21.374117 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
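The `osism.commons.operator` tasks above include "Set language variables in .bashrc configuration file", which appends three locale exports. An idempotent sketch of that edit against a scratch file (the real role targets the operator's `~/.bashrc`):

```shell
# Scratch file standing in for the operator's ~/.bashrc.
bashrc=$(mktemp)
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
  # grep -qxF: exact whole-line match, so re-running never duplicates entries
  grep -qxF "$line" "$bashrc" || printf '%s\n' "$line" >> "$bashrc"
done
cat "$bashrc"
```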
2025-08-29 14:17:21.381386 | 2025-08-29 14:17:21.381495 | TASK [Run manager part 1 + 2] 2025-08-29 14:17:23.308881 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:17:23.448592 | orchestrator | 2025-08-29 14:17:23.448661 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 14:17:23.448678 | orchestrator | 2025-08-29 14:17:23.448708 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:26.530591 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:26.530655 | orchestrator | 2025-08-29 14:17:26.530697 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:17:26.564986 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:26.565024 | orchestrator | 2025-08-29 14:17:26.565034 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:17:26.605776 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:26.605830 | orchestrator | 2025-08-29 14:17:26.605844 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:17:26.655944 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:26.655986 | orchestrator | 2025-08-29 14:17:26.655997 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:17:26.730452 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:26.730499 | orchestrator | 2025-08-29 14:17:26.730508 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:17:26.793555 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:26.793593 | orchestrator | 2025-08-29 14:17:26.793600 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:17:26.841328 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 14:17:26.841382 | orchestrator | 2025-08-29 14:17:26.841394 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:17:27.578551 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:27.578613 | orchestrator | 2025-08-29 14:17:27.578631 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:17:27.630682 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:27.630732 | orchestrator | 2025-08-29 14:17:27.630745 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:17:29.009922 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:29.009961 | orchestrator | 2025-08-29 14:17:29.009970 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:17:29.609173 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:29.609239 | orchestrator | 2025-08-29 14:17:29.609255 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:17:30.796166 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:30.796208 | orchestrator | 2025-08-29 14:17:30.796219 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:17:47.904777 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:47.905441 | orchestrator | 2025-08-29 14:17:47.905455 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:17:48.564438 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:48.564508 | orchestrator | 2025-08-29 14:17:48.564525 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-08-29 14:17:48.618679 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:48.618739 | orchestrator | 2025-08-29 14:17:48.618753 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 14:17:49.608159 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:49.608237 | orchestrator | 2025-08-29 14:17:49.608250 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 14:17:50.600584 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:50.600634 | orchestrator | 2025-08-29 14:17:50.600643 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 14:17:51.166369 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:51.166417 | orchestrator | 2025-08-29 14:17:51.166426 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 14:17:51.209185 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:17:51.209314 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:17:51.209332 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:17:51.209345 | orchestrator | deprecation_warnings=False in ansible.cfg. 
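The "Copy SSH public key" and "Copy SSH private key" tasks above stage the Terraform-generated keypair on the manager. A sketch of the equivalent file layout and permissions, using a scratch directory in place of the real home and placeholder key material:

```shell
# Scratch HOME; key contents are placeholders, not real key material.
home=$(mktemp -d)
install -d -m 700 "$home/.ssh"
printf '%s\n' 'ssh-rsa AAAAB3... placeholder' > "$home/.ssh/id_rsa.pub"
printf '%s\n' 'placeholder private key' > "$home/.ssh/id_rsa"
chmod 600 "$home/.ssh/id_rsa"   # private key must not be group/world readable
ls "$home/.ssh"
```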
2025-08-29 14:17:57.607262 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:57.607488 | orchestrator | 2025-08-29 14:17:57.607509 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 14:18:06.758872 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 14:18:06.758919 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 14:18:06.758931 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 14:18:06.758941 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 14:18:06.758951 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 14:18:06.758957 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 14:18:06.758963 | orchestrator | 2025-08-29 14:18:06.758969 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 14:18:07.803043 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:07.803166 | orchestrator | 2025-08-29 14:18:07.803186 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 14:18:07.845885 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:18:07.846108 | orchestrator | 2025-08-29 14:18:07.846151 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 14:18:11.043962 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:11.044001 | orchestrator | 2025-08-29 14:18:11.044009 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 14:18:11.082663 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:18:11.082700 | orchestrator | 2025-08-29 14:18:11.082707 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 14:19:59.449389 | orchestrator | changed: [testbed-manager] 2025-08-29 
14:19:59.449427 | orchestrator | 2025-08-29 14:19:59.449435 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 14:20:00.774586 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:00.774626 | orchestrator | 2025-08-29 14:20:00.774633 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:20:00.774641 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 14:20:00.774646 | orchestrator | 2025-08-29 14:20:01.023863 | orchestrator | ok: Runtime: 0:02:39.174600 2025-08-29 14:20:01.042529 | 2025-08-29 14:20:01.042720 | TASK [Reboot manager] 2025-08-29 14:20:02.580944 | orchestrator | ok: Runtime: 0:00:01.042933 2025-08-29 14:20:02.589779 | 2025-08-29 14:20:02.589911 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:20:19.024263 | orchestrator | ok 2025-08-29 14:20:19.036284 | 2025-08-29 14:20:19.036558 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:21:19.085417 | orchestrator | ok 2025-08-29 14:21:19.094528 | 2025-08-29 14:21:19.094673 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 14:21:21.892875 | orchestrator | 2025-08-29 14:21:21.893069 | orchestrator | # DEPLOY MANAGER 2025-08-29 14:21:21.893095 | orchestrator | 2025-08-29 14:21:21.893110 | orchestrator | + set -e 2025-08-29 14:21:21.893124 | orchestrator | + echo 2025-08-29 14:21:21.893137 | orchestrator | + echo '# DEPLOY MANAGER' 2025-08-29 14:21:21.893153 | orchestrator | + echo 2025-08-29 14:21:21.893202 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 14:21:21.896344 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 14:21:21.896370 | orchestrator | 2025-08-29 14:21:21.896383 | orchestrator | export CEPH_VERSION=reef 2025-08-29 14:21:21.896395 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 14:21:21.896407 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-08-29 14:21:21.896428 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 14:21:21.896440 | orchestrator | 2025-08-29 14:21:21.896457 | orchestrator | export ARA=false 2025-08-29 14:21:21.896468 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 14:21:21.896485 | orchestrator | export TEMPEST=false 2025-08-29 14:21:21.896497 | orchestrator | export IS_ZUUL=true 2025-08-29 14:21:21.896508 | orchestrator | 2025-08-29 14:21:21.896526 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2025-08-29 14:21:21.896537 | orchestrator | export EXTERNAL_API=false 2025-08-29 14:21:21.896548 | orchestrator | 2025-08-29 14:21:21.896559 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 14:21:21.896572 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 14:21:21.896584 | orchestrator | 2025-08-29 14:21:21.896595 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 14:21:21.896611 | orchestrator | 2025-08-29 14:21:21.896622 | orchestrator | + echo 2025-08-29 14:21:21.896638 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:21:21.897652 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:21:21.897675 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:21:21.897688 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:21:21.897699 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:21:21.897715 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:21:21.897726 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:21:21.897737 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:21:21.897751 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:21:21.897762 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:21:21.897773 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:21:21.897785 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:21:21.897902 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 14:21:21.897918 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 14:21:21.897929 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:21:21.897948 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:21:21.897959 | orchestrator | ++ export ARA=false
2025-08-29 14:21:21.897970 | orchestrator | ++ ARA=false
2025-08-29 14:21:21.897981 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:21:21.897992 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:21:21.898003 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:21:21.898097 | orchestrator | ++ TEMPEST=false
2025-08-29 14:21:21.898122 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:21:21.898142 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:21:21.898160 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:21:21.898172 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:21:21.898182 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:21:21.898193 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:21:21.898204 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:21:21.898215 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:21:21.898231 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:21.898242 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:21.898253 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:21:21.898264 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:21:21.898278 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-08-29 14:21:21.958999 | orchestrator | + docker version
2025-08-29 14:21:22.238718 | orchestrator | Client: Docker Engine - Community
2025-08-29 14:21:22.238794 | orchestrator | Version: 27.5.1
2025-08-29 14:21:22.238808 | orchestrator | API version: 1.47
2025-08-29 14:21:22.238818 | orchestrator | Go version: go1.22.11
2025-08-29 14:21:22.238828 | orchestrator | Git commit: 9f9e405
2025-08-29 14:21:22.238838 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:21:22.238849 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:21:22.238858 | orchestrator | Context: default
2025-08-29 14:21:22.238868 | orchestrator |
2025-08-29 14:21:22.238879 | orchestrator | Server: Docker Engine - Community
2025-08-29 14:21:22.238889 | orchestrator | Engine:
2025-08-29 14:21:22.238899 | orchestrator | Version: 27.5.1
2025-08-29 14:21:22.238909 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 14:21:22.238943 | orchestrator | Go version: go1.22.11
2025-08-29 14:21:22.238953 | orchestrator | Git commit: 4c9b3b0
2025-08-29 14:21:22.238963 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:21:22.238973 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:21:22.238982 | orchestrator | Experimental: false
2025-08-29 14:21:22.238992 | orchestrator | containerd:
2025-08-29 14:21:22.239011 | orchestrator | Version: 1.7.27
2025-08-29 14:21:22.239021 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 14:21:22.239066 | orchestrator | runc:
2025-08-29 14:21:22.239077 | orchestrator | Version: 1.2.5
2025-08-29 14:21:22.239087 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 14:21:22.239097 | orchestrator | docker-init:
2025-08-29 14:21:22.239107 | orchestrator | Version: 0.19.0
2025-08-29 14:21:22.239118 | orchestrator | GitCommit: de40ad0
2025-08-29 14:21:22.242874 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 14:21:22.250202 | orchestrator | + set -e
2025-08-29 14:21:22.250224 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:21:22.250235 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:21:22.250244 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:21:22.250254 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:21:22.250263 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:21:22.250273 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:21:22.250283 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:21:22.250293 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 14:21:22.250303 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 14:21:22.250312 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:21:22.250322 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:21:22.250331 | orchestrator | ++ export ARA=false
2025-08-29 14:21:22.250341 | orchestrator | ++ ARA=false
2025-08-29 14:21:22.250350 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:21:22.250360 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:21:22.250369 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:21:22.250379 | orchestrator | ++ TEMPEST=false
2025-08-29 14:21:22.250388 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:21:22.250397 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:21:22.250407 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:21:22.250417 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:21:22.250426 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:21:22.250436 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:21:22.250445 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:21:22.250454 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:21:22.250464 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:22.250474 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:22.250483 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:21:22.250492 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:21:22.250502 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:21:22.250512 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:21:22.250521 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:21:22.250531 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:21:22.250544 | orchestrator | ++ OSISM_APPLY_RETRY=1
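Annotation: the trace above echoes each variable twice (`++ export NAME=value` then `++ NAME=value`) because that is how bash's `set -x` renders the `export NAME=value` builtin. A hypothetical reconstruction of `/opt/manager-vars.sh`, using only the values visible in this trace (the real file may contain additional settings), would be:

```shell
# Hypothetical stand-in for /opt/manager-vars.sh, rebuilt from the
# values shown in the xtrace output above; not the actual file.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=9.2.0
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=false
export IS_ZUUL=true
export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```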
2025-08-29 14:21:22.250554 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-08-29 14:21:22.250564 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-08-29 14:21:22.256889 | orchestrator | + set -e
2025-08-29 14:21:22.256908 | orchestrator | + VERSION=9.2.0
2025-08-29 14:21:22.256921 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:22.269481 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-08-29 14:21:22.269501 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:22.275018 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:22.280833 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-08-29 14:21:22.290507 | orchestrator | + set -e
2025-08-29 14:21:22.290585 | orchestrator | /opt/configuration ~
2025-08-29 14:21:22.290599 | orchestrator | + pushd /opt/configuration
2025-08-29 14:21:22.290609 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:21:22.295564 | orchestrator | + source /opt/venv/bin/activate
2025-08-29 14:21:22.297817 | orchestrator | ++ deactivate nondestructive
2025-08-29 14:21:22.297836 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:22.297849 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:22.297878 | orchestrator | ++ hash -r
2025-08-29 14:21:22.297888 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:22.297896 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 14:21:22.297905 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 14:21:22.297913 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 14:21:22.297922 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 14:21:22.297931 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 14:21:22.297939 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:22.297948 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:22.297957 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:22.297966 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:22.297975 | orchestrator | ++ export PATH
2025-08-29 14:21:22.297985 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:22.297994 | orchestrator | ++ '[' -z '' ']'
2025-08-29 14:21:22.298002 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 14:21:22.298011 | orchestrator | ++ PS1='(venv) '
2025-08-29 14:21:22.298071 | orchestrator | ++ export PS1
2025-08-29 14:21:22.298080 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 14:21:22.298089 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 14:21:22.298098 | orchestrator | ++ hash -r
2025-08-29 14:21:22.298107 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-08-29 14:21:23.602397 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-08-29 14:21:23.603689 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2025-08-29 14:21:23.605514 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-08-29 14:21:23.607222 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-08-29 14:21:23.608911 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-08-29 14:21:23.621544 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-08-29 14:21:23.622841 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-08-29 14:21:23.623988 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2025-08-29 14:21:23.625505 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-08-29 14:21:23.658692 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3)
2025-08-29 14:21:23.659988 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-08-29 14:21:23.661611 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-08-29 14:21:23.662881 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3)
2025-08-29 14:21:23.666948 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-08-29 14:21:23.879410 | orchestrator | ++ which gilt
2025-08-29 14:21:23.967264 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-08-29 14:21:23.967312 | orchestrator | + /opt/venv/bin/gilt overlay
2025-08-29 14:21:24.131962 | orchestrator | osism.cfg-generics:
2025-08-29 14:21:24.321794 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-08-29 14:21:24.322344 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-08-29 14:21:24.322368 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-08-29 14:21:24.322381 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-08-29 14:21:24.975737 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-08-29 14:21:24.987913 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-08-29 14:21:25.328997 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-08-29 14:21:25.377588 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:21:25.377645 | orchestrator | + deactivate
2025-08-29 14:21:25.377658 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-08-29 14:21:25.377670 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:25.377681 | orchestrator | + export PATH
2025-08-29 14:21:25.377692 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-08-29 14:21:25.377704 | orchestrator | + '[' -n '' ']'
2025-08-29 14:21:25.377717 | orchestrator | + hash -r
2025-08-29 14:21:25.377728 | orchestrator | + '[' -n '' ']'
2025-08-29 14:21:25.377739 | orchestrator | + unset VIRTUAL_ENV
2025-08-29 14:21:25.377749 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-08-29 14:21:25.377760 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-08-29 14:21:25.377779 | orchestrator | + unset -f deactivate
2025-08-29 14:21:25.377790 | orchestrator | ~
2025-08-29 14:21:25.377801 | orchestrator | + popd
2025-08-29 14:21:25.379782 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-08-29 14:21:25.379801 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-08-29 14:21:25.380536 | orchestrator | ++ semver 9.2.0 7.0.0
2025-08-29 14:21:25.447201 | orchestrator | + [[ 1 -ge 0 ]]
2025-08-29 14:21:25.447251 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-08-29 14:21:25.447265 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-08-29 14:21:25.546644 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:21:25.546723 | orchestrator | + source /opt/venv/bin/activate
2025-08-29 14:21:25.546735 | orchestrator | ++ deactivate nondestructive
2025-08-29 14:21:25.546757 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:25.546768 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:25.546779 | orchestrator | ++ hash -r
2025-08-29 14:21:25.546790 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:25.546801 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 14:21:25.546812 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 14:21:25.546823 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 14:21:25.546838 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 14:21:25.546848 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 14:21:25.546859 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:25.546870 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:25.546881 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:25.546932 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:25.547240 | orchestrator | ++ export PATH
2025-08-29 14:21:25.547361 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:25.547377 | orchestrator | ++ '[' -z '' ']'
2025-08-29 14:21:25.547388 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 14:21:25.547505 | orchestrator | ++ PS1='(venv) '
2025-08-29 14:21:25.547521 | orchestrator | ++ export PS1
2025-08-29 14:21:25.547532 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 14:21:25.547542 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 14:21:25.547553 | orchestrator | ++ hash -r
2025-08-29 14:21:25.547692 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-08-29 14:21:26.853903 | orchestrator |
2025-08-29 14:21:26.853983 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-08-29 14:21:26.854003 | orchestrator |
2025-08-29 14:21:26.854079 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 14:21:27.512994 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:27.513106 | orchestrator |
2025-08-29 14:21:27.513123 | orchestrator | TASK [Copy fact files] *********************************************************
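Annotation: `set-manager-version.sh`, traced earlier in this log, pins `manager_version` and deletes the `ceph_version`/`openstack_version` lines with three `sed -i` calls. Their effect can be reproduced on a scratch file (the file contents here are illustrative, not the real configuration.yml):

```shell
# Illustrative input; the real configuration.yml has many more keys.
cat > /tmp/configuration.yml <<'EOF'
manager_version: latest
ceph_version: reef
openstack_version: 2024.2
EOF

# The same three sed invocations shown in the trace, on the scratch copy.
VERSION=9.2.0
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" /tmp/configuration.yml
sed -i /ceph_version:/d /tmp/configuration.yml
sed -i /openstack_version:/d /tmp/configuration.yml

cat /tmp/configuration.yml   # prints: manager_version: 9.2.0
```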
2025-08-29 14:21:28.579999 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:28.580110 | orchestrator |
2025-08-29 14:21:28.580127 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-08-29 14:21:28.580140 | orchestrator |
2025-08-29 14:21:28.580151 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:21:31.127473 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:31.127564 | orchestrator |
2025-08-29 14:21:31.127580 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-08-29 14:21:31.200604 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:31.200696 | orchestrator |
2025-08-29 14:21:31.200713 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-08-29 14:21:31.730448 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:31.730546 | orchestrator |
2025-08-29 14:21:31.730563 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-08-29 14:21:31.768889 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:31.768975 | orchestrator |
2025-08-29 14:21:31.769002 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-08-29 14:21:32.155121 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:32.155219 | orchestrator |
2025-08-29 14:21:32.155236 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-08-29 14:21:32.209787 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:32.209880 | orchestrator |
2025-08-29 14:21:32.209895 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-08-29 14:21:32.579234 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:32.579359 | orchestrator |
2025-08-29 14:21:32.579384 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-08-29 14:21:32.685582 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:32.685685 | orchestrator |
2025-08-29 14:21:32.685702 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-08-29 14:21:32.685715 | orchestrator |
2025-08-29 14:21:32.685727 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:21:34.614643 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:34.614752 | orchestrator |
2025-08-29 14:21:34.614768 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-08-29 14:21:34.736246 | orchestrator | included: osism.services.traefik for testbed-manager
2025-08-29 14:21:34.736344 | orchestrator |
2025-08-29 14:21:34.736359 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-08-29 14:21:34.798617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-08-29 14:21:34.798709 | orchestrator |
2025-08-29 14:21:34.798723 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-08-29 14:21:35.982559 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-08-29 14:21:35.982659 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-08-29 14:21:35.982678 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-08-29 14:21:35.982690 | orchestrator |
2025-08-29 14:21:35.982704 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-08-29 14:21:37.970995 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-08-29 14:21:37.971134 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-08-29 14:21:37.971150 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-08-29 14:21:37.971162 | orchestrator |
2025-08-29 14:21:37.971175 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-08-29 14:21:38.656868 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:21:38.656997 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:38.657015 | orchestrator |
2025-08-29 14:21:38.657051 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-08-29 14:21:39.363121 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:21:39.363218 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:39.363234 | orchestrator |
2025-08-29 14:21:39.363247 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-08-29 14:21:39.427890 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:39.427967 | orchestrator |
2025-08-29 14:21:39.427983 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-08-29 14:21:39.820917 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:39.821091 | orchestrator |
2025-08-29 14:21:39.821113 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-08-29 14:21:39.916150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-08-29 14:21:39.916272 | orchestrator |
2025-08-29 14:21:39.916301 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-08-29 14:21:41.081156 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:41.081259 | orchestrator |
2025-08-29 14:21:41.081276 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-08-29 14:21:42.001231 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:42.001340 | orchestrator |
2025-08-29 14:21:42.001358 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-08-29 14:21:52.322832 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:52.322947 | orchestrator |
2025-08-29 14:21:52.322985 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-08-29 14:21:52.384229 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:52.384309 | orchestrator |
2025-08-29 14:21:52.384322 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-08-29 14:21:52.384335 | orchestrator |
2025-08-29 14:21:52.384346 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:21:54.216923 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:54.217079 | orchestrator |
2025-08-29 14:21:54.217098 | orchestrator | TASK [Apply manager role] ******************************************************
2025-08-29 14:21:54.334624 | orchestrator | included: osism.services.manager for testbed-manager
2025-08-29 14:21:54.334720 | orchestrator |
2025-08-29 14:21:54.334736 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-08-29 14:21:54.394715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:21:54.394822 | orchestrator |
2025-08-29 14:21:54.394840 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-08-29 14:21:57.232632 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:57.232722 | orchestrator |
2025-08-29 14:21:57.232737 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-08-29 14:21:57.281953 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:57.282097 | orchestrator |
2025-08-29 14:21:57.282113 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-08-29 14:21:57.411870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-08-29 14:21:57.411950 | orchestrator |
2025-08-29 14:21:57.411966 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-08-29 14:22:00.385144 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-08-29 14:22:00.385232 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-08-29 14:22:00.385248 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-08-29 14:22:00.385261 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-08-29 14:22:00.385272 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-08-29 14:22:00.385283 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-08-29 14:22:00.385294 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-08-29 14:22:00.385305 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-08-29 14:22:00.385316 | orchestrator |
2025-08-29 14:22:00.385331 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-08-29 14:22:01.077413 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:01.077508 | orchestrator |
2025-08-29 14:22:01.077525 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-08-29 14:22:01.728487 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:01.728572 | orchestrator |
2025-08-29 14:22:01.728584 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-08-29 14:22:01.813079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-08-29 14:22:01.813156 | orchestrator |
2025-08-29 14:22:01.813169 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-08-29 14:22:03.106409 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-08-29 14:22:03.106498 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-08-29 14:22:03.106513 | orchestrator |
2025-08-29 14:22:03.106526 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-08-29 14:22:03.795937 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:03.796061 | orchestrator |
2025-08-29 14:22:03.796079 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-08-29 14:22:03.851072 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:03.851132 | orchestrator |
2025-08-29 14:22:03.851145 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-08-29 14:22:03.910419 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:03.910461 | orchestrator |
2025-08-29 14:22:03.910474 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-08-29 14:22:03.981253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-08-29 14:22:03.981327 | orchestrator |
2025-08-29 14:22:03.981341 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-08-29 14:22:05.423862 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:22:05.423911 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:22:05.423916 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:05.423921 | orchestrator |
2025-08-29 14:22:05.423926 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-08-29 14:22:06.118302 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:06.118381 | orchestrator |
2025-08-29 14:22:06.118397 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-08-29 14:22:06.161633 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:06.161685 | orchestrator |
2025-08-29 14:22:06.161707 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-08-29 14:22:06.251688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-08-29 14:22:06.251750 | orchestrator |
2025-08-29 14:22:06.251765 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-08-29 14:22:06.810250 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:06.810327 | orchestrator |
2025-08-29 14:22:06.810339 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-08-29 14:22:07.260640 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:07.260741 | orchestrator |
2025-08-29 14:22:07.260758 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-08-29 14:22:08.540446 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-08-29 14:22:08.540552 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-08-29 14:22:08.540567 | orchestrator |
2025-08-29 14:22:08.540581 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-08-29 14:22:09.227626 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:09.227730 | orchestrator |
2025-08-29 14:22:09.227746 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-08-29 14:22:09.666296 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:09.666384 | orchestrator |
2025-08-29 14:22:09.666400 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-08-29 14:22:10.046695 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:10.046792 | orchestrator |
2025-08-29 14:22:10.046807 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-08-29 14:22:10.085438 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:10.085526 | orchestrator |
2025-08-29 14:22:10.085539 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-08-29 14:22:10.154139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-08-29 14:22:10.154228 | orchestrator |
2025-08-29 14:22:10.154242 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-08-29 14:22:10.207218 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:10.207321 | orchestrator |
2025-08-29 14:22:10.207335 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-08-29 14:22:12.338912 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-08-29 14:22:12.339076 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-08-29 14:22:12.339107 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-08-29 14:22:12.339122 | orchestrator |
2025-08-29 14:22:12.339136 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-08-29 14:22:13.078893 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:13.078994 | orchestrator |
2025-08-29 14:22:13.079059 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-08-29 14:22:13.767107 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:13.767298 | orchestrator |
2025-08-29 14:22:13.767330 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-08-29 14:22:14.502185 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:14.502322 | orchestrator |
2025-08-29 14:22:14.502339 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-08-29 14:22:14.576838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-08-29 14:22:14.576911 | orchestrator |
2025-08-29 14:22:14.576925 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-08-29 14:22:14.635167 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:14.635260 | orchestrator |
2025-08-29 14:22:14.635274 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-08-29 14:22:15.377813 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-08-29 14:22:15.377902 | orchestrator |
2025-08-29 14:22:15.377917 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-08-29 14:22:15.472884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-08-29 14:22:15.472958 | orchestrator |
2025-08-29 14:22:15.472970 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-08-29 14:22:16.242962 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:16.243070 | orchestrator |
2025-08-29 14:22:16.243085 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-08-29 14:22:16.847864 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:16.847953 | orchestrator |
2025-08-29 14:22:16.847966 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-08-29 14:22:16.909495 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:16.909541 | orchestrator |
2025-08-29 14:22:16.909556 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-08-29 14:22:16.975366 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:16.975405 | orchestrator |
2025-08-29 14:22:16.975417 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-08-29 14:22:17.824503 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:17.824609 | orchestrator |
2025-08-29 14:22:17.824626 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-08-29 14:23:30.902481 | orchestrator | changed: [testbed-manager]
2025-08-29 14:23:30.902608 | orchestrator |
2025-08-29 14:23:30.902626 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-08-29 14:23:32.229590 | orchestrator | ok: [testbed-manager]
2025-08-29 14:23:32.229691 | orchestrator |
2025-08-29 14:23:32.229707 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-08-29 14:23:32.296150 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:23:32.296226 | orchestrator |
2025-08-29 14:23:32.296241 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-08-29 14:23:34.952529 | orchestrator | changed: [testbed-manager]
2025-08-29 14:23:34.952637 | orchestrator |
2025-08-29 14:23:34.952654 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-08-29 14:23:35.032935 | orchestrator | ok: [testbed-manager]
2025-08-29 14:23:35.033066 |
orchestrator | 2025-08-29 14:23:35.033081 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 14:23:35.033094 | orchestrator | 2025-08-29 14:23:35.033136 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-08-29 14:23:35.154168 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:23:35.154254 | orchestrator | 2025-08-29 14:23:35.154269 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-08-29 14:24:35.209572 | orchestrator | Pausing for 60 seconds 2025-08-29 14:24:35.209686 | orchestrator | changed: [testbed-manager] 2025-08-29 14:24:35.209701 | orchestrator | 2025-08-29 14:24:35.209712 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-08-29 14:24:40.046903 | orchestrator | changed: [testbed-manager] 2025-08-29 14:24:40.047096 | orchestrator | 2025-08-29 14:24:40.047122 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-08-29 14:25:21.866775 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-08-29 14:25:21.866876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-08-29 14:25:21.866893 | orchestrator | changed: [testbed-manager] 2025-08-29 14:25:21.866937 | orchestrator | 2025-08-29 14:25:21.866968 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-08-29 14:25:32.232236 | orchestrator | changed: [testbed-manager] 2025-08-29 14:25:32.232370 | orchestrator | 2025-08-29 14:25:32.232391 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-08-29 14:25:32.317556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-08-29 14:25:32.317629 | orchestrator | 2025-08-29 14:25:32.317643 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 14:25:32.317656 | orchestrator | 2025-08-29 14:25:32.317668 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-08-29 14:25:32.373537 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:25:32.373578 | orchestrator | 2025-08-29 14:25:32.373591 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:25:32.373603 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:25:32.373615 | orchestrator | 2025-08-29 14:25:32.451110 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 14:25:32.451174 | orchestrator | + deactivate 2025-08-29 14:25:32.451194 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 14:25:32.451207 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 14:25:32.451219 | orchestrator | + export PATH 2025-08-29 14:25:32.451230 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 
14:25:32.451241 | orchestrator | + '[' -n '' ']' 2025-08-29 14:25:32.451252 | orchestrator | + hash -r 2025-08-29 14:25:32.451264 | orchestrator | + '[' -n '' ']' 2025-08-29 14:25:32.451275 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 14:25:32.451286 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 14:25:32.451297 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-08-29 14:25:32.451308 | orchestrator | + unset -f deactivate 2025-08-29 14:25:32.451319 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-08-29 14:25:32.459620 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 14:25:32.459712 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 14:25:32.459728 | orchestrator | + local max_attempts=60 2025-08-29 14:25:32.459741 | orchestrator | + local name=ceph-ansible 2025-08-29 14:25:32.459752 | orchestrator | + local attempt_num=1 2025-08-29 14:25:32.460202 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:25:32.498632 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:25:32.498711 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:25:32.498723 | orchestrator | + local max_attempts=60 2025-08-29 14:25:32.498735 | orchestrator | + local name=kolla-ansible 2025-08-29 14:25:32.498745 | orchestrator | + local attempt_num=1 2025-08-29 14:25:32.499145 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:25:32.533130 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:25:32.533158 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:25:32.533197 | orchestrator | + local max_attempts=60 2025-08-29 14:25:32.533208 | orchestrator | + local name=osism-ansible 2025-08-29 14:25:32.533218 | orchestrator | + local attempt_num=1 2025-08-29 14:25:32.534151 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-08-29 14:25:32.574676 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:25:32.574724 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:25:32.574737 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:25:33.264127 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-08-29 14:25:33.482133 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-08-29 14:25:33.482222 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482236 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482246 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-08-29 14:25:33.482256 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-08-29 14:25:33.482265 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482273 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482282 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-08-29 14:25:33.482290 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-08-29 14:25:33.482298 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-08-29 14:25:33.482307 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482315 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-08-29 14:25:33.482323 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482332 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.482340 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-08-29 14:25:33.488403 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 14:25:33.543681 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 14:25:33.543765 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 14:25:33.548836 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 14:25:45.518382 | orchestrator | 2025-08-29 14:25:45 | INFO  | Task 34a57d51-02c8-4e43-bf86-2b544f8af319 (resolvconf) was prepared for execution. 2025-08-29 14:25:45.518521 | orchestrator | 2025-08-29 14:25:45 | INFO  | It takes a moment until task 34a57d51-02c8-4e43-bf86-2b544f8af319 (resolvconf) has been started and output is visible here. 
2025-08-29 14:26:00.453562 | orchestrator | 2025-08-29 14:26:00.453680 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-08-29 14:26:00.453700 | orchestrator | 2025-08-29 14:26:00.453712 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:26:00.453724 | orchestrator | Friday 29 August 2025 14:25:49 +0000 (0:00:00.158) 0:00:00.158 ********* 2025-08-29 14:26:00.453735 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:00.453747 | orchestrator | 2025-08-29 14:26:00.453758 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 14:26:00.453770 | orchestrator | Friday 29 August 2025 14:25:54 +0000 (0:00:04.628) 0:00:04.786 ********* 2025-08-29 14:26:00.453781 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:26:00.453793 | orchestrator | 2025-08-29 14:26:00.453804 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 14:26:00.453815 | orchestrator | Friday 29 August 2025 14:25:54 +0000 (0:00:00.053) 0:00:04.840 ********* 2025-08-29 14:26:00.453826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-08-29 14:26:00.453838 | orchestrator | 2025-08-29 14:26:00.453849 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 14:26:00.453860 | orchestrator | Friday 29 August 2025 14:25:54 +0000 (0:00:00.075) 0:00:04.915 ********* 2025-08-29 14:26:00.453871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:26:00.453938 | orchestrator | 2025-08-29 14:26:00.453950 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-08-29 14:26:00.453961 | orchestrator | Friday 29 August 2025 14:25:54 +0000 (0:00:00.071) 0:00:04.986 ********* 2025-08-29 14:26:00.453972 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:00.453982 | orchestrator | 2025-08-29 14:26:00.453993 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 14:26:00.454004 | orchestrator | Friday 29 August 2025 14:25:55 +0000 (0:00:01.093) 0:00:06.080 ********* 2025-08-29 14:26:00.454015 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:26:00.454086 | orchestrator | 2025-08-29 14:26:00.454099 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 14:26:00.454111 | orchestrator | Friday 29 August 2025 14:25:55 +0000 (0:00:00.056) 0:00:06.136 ********* 2025-08-29 14:26:00.454124 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:00.454136 | orchestrator | 2025-08-29 14:26:00.454148 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 14:26:00.454161 | orchestrator | Friday 29 August 2025 14:25:56 +0000 (0:00:00.574) 0:00:06.711 ********* 2025-08-29 14:26:00.454172 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:26:00.454185 | orchestrator | 2025-08-29 14:26:00.454198 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 14:26:00.454212 | orchestrator | Friday 29 August 2025 14:25:56 +0000 (0:00:00.091) 0:00:06.803 ********* 2025-08-29 14:26:00.454224 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:00.454236 | orchestrator | 2025-08-29 14:26:00.454248 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 14:26:00.454260 | orchestrator | Friday 29 August 2025 14:25:56 +0000 (0:00:00.540) 0:00:07.343 ********* 2025-08-29 14:26:00.454272 | orchestrator | changed: 
[testbed-manager] 2025-08-29 14:26:00.454305 | orchestrator | 2025-08-29 14:26:00.454318 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 14:26:00.454331 | orchestrator | Friday 29 August 2025 14:25:57 +0000 (0:00:01.092) 0:00:08.436 ********* 2025-08-29 14:26:00.454343 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:00.454355 | orchestrator | 2025-08-29 14:26:00.454368 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 14:26:00.454381 | orchestrator | Friday 29 August 2025 14:25:58 +0000 (0:00:00.991) 0:00:09.427 ********* 2025-08-29 14:26:00.454393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-08-29 14:26:00.454405 | orchestrator | 2025-08-29 14:26:00.454419 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 14:26:00.454431 | orchestrator | Friday 29 August 2025 14:25:58 +0000 (0:00:00.083) 0:00:09.511 ********* 2025-08-29 14:26:00.454442 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:00.454452 | orchestrator | 2025-08-29 14:26:00.454463 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:26:00.454485 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:26:00.454497 | orchestrator | 2025-08-29 14:26:00.454508 | orchestrator | 2025-08-29 14:26:00.454518 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:26:00.454529 | orchestrator | Friday 29 August 2025 14:26:00 +0000 (0:00:01.183) 0:00:10.694 ********* 2025-08-29 14:26:00.454539 | orchestrator | =============================================================================== 2025-08-29 14:26:00.454550 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.63s 2025-08-29 14:26:00.454560 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2025-08-29 14:26:00.454571 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2025-08-29 14:26:00.454581 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2025-08-29 14:26:00.454592 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2025-08-29 14:26:00.454602 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.57s 2025-08-29 14:26:00.454631 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-08-29 14:26:00.454642 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-08-29 14:26:00.454653 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-08-29 14:26:00.454663 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-08-29 14:26:00.454674 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-08-29 14:26:00.454684 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-08-29 14:26:00.454695 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-08-29 14:26:00.788818 | orchestrator | + osism apply sshconfig 2025-08-29 14:26:12.727383 | orchestrator | 2025-08-29 14:26:12 | INFO  | Task 5255b511-9f81-492b-90ff-2e9bee11291d (sshconfig) was prepared for execution. 
2025-08-29 14:26:12.727515 | orchestrator | 2025-08-29 14:26:12 | INFO  | It takes a moment until task 5255b511-9f81-492b-90ff-2e9bee11291d (sshconfig) has been started and output is visible here. 2025-08-29 14:26:25.270786 | orchestrator | 2025-08-29 14:26:25.271018 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-08-29 14:26:25.271049 | orchestrator | 2025-08-29 14:26:25.271062 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-08-29 14:26:25.271074 | orchestrator | Friday 29 August 2025 14:26:17 +0000 (0:00:00.170) 0:00:00.170 ********* 2025-08-29 14:26:25.271112 | orchestrator | ok: [testbed-manager] 2025-08-29 14:26:25.271125 | orchestrator | 2025-08-29 14:26:25.271136 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-08-29 14:26:25.271147 | orchestrator | Friday 29 August 2025 14:26:17 +0000 (0:00:00.599) 0:00:00.769 ********* 2025-08-29 14:26:25.271157 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:25.271169 | orchestrator | 2025-08-29 14:26:25.271179 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-08-29 14:26:25.271190 | orchestrator | Friday 29 August 2025 14:26:18 +0000 (0:00:00.525) 0:00:01.295 ********* 2025-08-29 14:26:25.271200 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:26:25.271212 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:26:25.271223 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:26:25.271233 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:26:25.271243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:26:25.271254 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:26:25.271265 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-08-29 14:26:25.271275 | orchestrator | 2025-08-29 14:26:25.271286 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-08-29 14:26:25.271296 | orchestrator | Friday 29 August 2025 14:26:24 +0000 (0:00:06.073) 0:00:07.368 ********* 2025-08-29 14:26:25.271307 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:26:25.271317 | orchestrator | 2025-08-29 14:26:25.271330 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-08-29 14:26:25.271342 | orchestrator | Friday 29 August 2025 14:26:24 +0000 (0:00:00.072) 0:00:07.441 ********* 2025-08-29 14:26:25.271354 | orchestrator | changed: [testbed-manager] 2025-08-29 14:26:25.271366 | orchestrator | 2025-08-29 14:26:25.271378 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:26:25.271392 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:26:25.271404 | orchestrator | 2025-08-29 14:26:25.271417 | orchestrator | 2025-08-29 14:26:25.271429 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:26:25.271441 | orchestrator | Friday 29 August 2025 14:26:24 +0000 (0:00:00.587) 0:00:08.028 ********* 2025-08-29 14:26:25.271472 | orchestrator | =============================================================================== 2025-08-29 14:26:25.271484 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.07s 2025-08-29 14:26:25.271497 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-08-29 14:26:25.271509 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-08-29 14:26:25.271520 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2025-08-29 14:26:25.271530 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-08-29 14:26:25.585204 | orchestrator | + osism apply known-hosts 2025-08-29 14:26:37.643425 | orchestrator | 2025-08-29 14:26:37 | INFO  | Task ea1d53d7-4eef-4985-865c-9735a71b00dd (known-hosts) was prepared for execution. 2025-08-29 14:26:37.643575 | orchestrator | 2025-08-29 14:26:37 | INFO  | It takes a moment until task ea1d53d7-4eef-4985-865c-9735a71b00dd (known-hosts) has been started and output is visible here. 2025-08-29 14:26:54.371573 | orchestrator | 2025-08-29 14:26:54.371724 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-08-29 14:26:54.371740 | orchestrator | 2025-08-29 14:26:54.371753 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-08-29 14:26:54.371766 | orchestrator | Friday 29 August 2025 14:26:41 +0000 (0:00:00.157) 0:00:00.157 ********* 2025-08-29 14:26:54.371778 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:26:54.371819 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:26:54.371831 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:26:54.371842 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:26:54.371882 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:26:54.371893 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:26:54.371904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:26:54.371914 | orchestrator | 2025-08-29 14:26:54.371926 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-08-29 14:26:54.371938 | orchestrator | Friday 29 August 2025 14:26:47 +0000 (0:00:05.762) 0:00:05.919 ********* 2025-08-29 
14:26:54.371950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:26:54.371963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:26:54.371974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:26:54.371984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:26:54.371995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:26:54.372006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:26:54.372016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:26:54.372026 | orchestrator | 2025-08-29 14:26:54.372038 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372051 | orchestrator | Friday 29 August 2025 14:26:47 +0000 (0:00:00.173) 0:00:06.093 ********* 2025-08-29 14:26:54.372063 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIKAa2ibto8pyvdiJc6xWNa7p6moZ1AApsQq+BtQyj8mH) 2025-08-29 14:26:54.372084 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAItf+eAqEl7j8PWVW25A26J/SbOjvKhtAjmGM6m9f5DGhXXmeMyBLV4u1Rash6Jylqlwa6WQ5WIWv6/eM43HyGi2Y0DTQSX4HARe9H1Gohmq5F4H7bSehGVuXoE6djVJ5K6mMScOW9J5tR7N+y9r+2q1htpnjvorVwLBBrmYBIlKRQD8thMMJk9TXCiwK5w6TLc4kJfklec3xbTlPb0Ry3NYjYn6PMYtHBSJRVzxO48Ved3JmxiW3hzMj2KkuGG9vdSh97zKWmgLCCVe3d7BbpFwaQmhqvhDEYiQQgYrV31FSGSrSP5W6mN7HS4UG7rY330Mb3qf4Qiwqutvo+TEjUy5GaATehm0TMR4apONIh3ZhF003moiywvvS3XJfmUZstF0gINmqcKCigksV5zaYBSv40mwMDE9DfcF/zBPCPz9M/nwHZOubgRxDMPq6eu0EfeF8Qyjr4qQAnA4VnNbS7IRnzdbMvSaaoj/ANQvmGiSuyUAs+yHsYR/Q28MA7AM=) 2025-08-29 14:26:54.372169 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDYXQlssLrspiY2GnAnU4n3rjbk0JK+y2n6Ro8yXSUUokk0jPpChQrGd7D01BLIMij/lca8cpxMYoUdFBki4hi0=) 2025-08-29 14:26:54.372183 | orchestrator | 2025-08-29 14:26:54.372194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372205 | orchestrator | Friday 29 August 2025 14:26:48 +0000 (0:00:01.205) 0:00:07.299 ********* 2025-08-29 14:26:54.372216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDjgXhh9UP/PkSsn9345cVtzKGkM5GlMU+jfg5M6E+nE) 2025-08-29 14:26:54.372276 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ7U5Et4sQJ3CUFGbuUSzaoXPrqLpSpka3iyCbpyICif2AulRLHJXQis9glqiWfMdm6eegMFeSU2Mo/4NVpwHtbeNQaKRjqnqJqoloS5q5hxKDzL8W54EDXvHzP5ANjVdASdXfgKS8/CIPKaSTeDhB9KnHrKixMjaJU6ZuHlNpUWRvSy+0rgtY6OjIEDZgGnskeav1FyQjLeXblA8MNu/i9Iq6GKFxeoTKU29WsILhgjOfC9NLxZ/Y0Ll2iKG9tIPFjwJgMXqnVIPc+ndUEt/i+D5zrEgAtNhvhV8wOaMeW5U7jmQSsYq0ElWkLxnVG23Bev4CkDDT/+ND40Prdq23zST76RHNn2jPrp61PuKbpkN3JdHTv6VsZb64EsZyQ9qR9QrO0+L/qaG2wD09SZdQXEgZiK6E7/7m0uAeV5DRwiFexR+9ekcgrXELVysXaA3mvkLmJZwXh4466OD9mAyJA4+myrvbFL51UxdRyoDU18gnYtRYi7nAmvSP5X8Octc=) 2025-08-29 14:26:54.372290 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQpovHB3ZWP9MSoobEFuO3+cOkSiTJ9pDWcI7kQ3Sv0dqdP/FNsQpua7Q4xvGvygVy4rgg/hiScp0xeV192sYA=) 2025-08-29 14:26:54.372301 | orchestrator | 2025-08-29 14:26:54.372312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372322 | orchestrator | Friday 29 August 2025 14:26:49 +0000 (0:00:01.108) 0:00:08.407 ********* 2025-08-29 14:26:54.372333 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtuVkErADwtHMZ8GzcaYgQhMEJdI2bE5uzRLaFoZF/XWiEKseqG5U5V9H1OLIbeePDgw9WmSjgWErWE5DMBeQA=) 2025-08-29 14:26:54.372344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHyrK7l/KXfCzwKgDJGDc8CneyR7eWcCAJrXlvz0kGEE) 2025-08-29 14:26:54.372355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDHjpfE90/SKOquiiyMtvkgERvMG0RONnHgDDakrgh+3lN2o/7iBGq1dsXPWwWVAo8xEY/VCGELdldkuGXhmSiqniAZ+O4AgNxINfzA4r9J6C/x+suwcuu9FbVV5cYSPmnLKnteSjitSJ149RU1+Ye6y4j/sTZ1YVbK1SdM19/alx57FVAa6UQnA3MXrpj28ELJ7ISg/oZeqeHdEPhRGcdXrfgG40STs1f1Q1K04bO4/7REVg5hO03zdTezoWXGRbMt7RLE4lnULQxKZmcMgwxWVh3QrSSN+Mqt866HE/ZbwbGB5haV/MiTbSk9qdRQBl8kafzTI1dz3y3UZleqYTjxxAqvFRi3LMbFDWTPGrx5drW/RiDAtwhQ3RzeiAckUkUjNZL7clj0f7JBf0PutGKeV3Lr0DAFA5QTuh8lStGrS4oj+cUVdDN6lNIhfbeo7V6RhVlHK097Wu+ek8IoEcUj6rMvXJcKIRPwt+0CKPR/Cwivao0ZD9Eb4eJHGcxpWxc=) 2025-08-29 14:26:54.372366 | orchestrator | 2025-08-29 14:26:54.372377 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372388 | orchestrator | Friday 29 August 2025 14:26:50 +0000 (0:00:01.086) 0:00:09.494 ********* 2025-08-29 14:26:54.372399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCAv2zvDXE3+hhtKaqy2Uj8r38bSK47OwoRKgKEdcCx8meE1Y90Olti4zjrhxudCbTQMGxUFcwGqw3yrx/o8C0Y=) 2025-08-29 14:26:54.372410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5udjVUhWxRayYthGYPR6oOwv8oB3MCmu8WGdr/QEKU) 2025-08-29 14:26:54.372421 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQVFEIwq4h5wKkvVAf5i3arYhCERI6hpyWKiJlET3LOfmb2sYiMGBo2YbQGr/fEx0NkTGchQroa+4YpOT7O564oZNZrF0MDwk9CRvUId8O8QQp1cxZeG3x6ybhjVDW660MNrm2QrF3hFhFVx/N8Nftt9SBMH2HprMq6NHl9volQH4w3n2pW2uintackZNxEvIiktseUiCTbYsYZTPX2qX/Dq6cE3t91CF1edTGxShA3cQSBv3e8g3pksvjLmHY47z0eXDMU8yiHOlcig1BT9jEZUtBZKaeGJxZ4OdySzzTbJxDI8AOAHxloH5t+QO98h46nV/SYKcLD5F6wJeMa+fpLo6sB54Ka6EO391A7eD2/KPnJWikmOSdfnJt4C8iiOGqYW2X9R8CLJiWJe7k9txjdJO1/CfnjEBPywCNp4Pa6g0rGM/fFM0zqWzLA7kzAZj/0FPJhVjT07Ln3uAn6PV7R2GEtiUnKxE5Vp5J8BwKh5MQZkDu4GNp4wrX5H6OQ0k=) 2025-08-29 14:26:54.372431 | orchestrator | 2025-08-29 14:26:54.372442 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372453 | orchestrator | Friday 29 August 2025 14:26:52 +0000 (0:00:01.141) 0:00:10.636 ********* 2025-08-29 14:26:54.372464 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDec99D61bDO505S5miUb2gBo8dnzcOgvbcbI4e0XVKUHQuoBDnpmPoHlTUEvph/4qoj86OcD92t73cGcSMIijx32w8n4VYwhvfTNAAaVRgveDNsLMOPtmrQ85Wn2OIuAA8syqivvrkTwD+3o5JVC/vDU9uApp4RxkViQaIy3q8xdyowXmalIVX3qMoPIynk/zd92YY3gzFCQ5Tz8iArSrEfk9yaGZhqvq2exUqC92ZVW+lkPvilsNwn5wzw5JzTYD75Kdy4ba94A7crQBMa4MKQTHdTQScDC5sFDWANj6h3oUpNyqMBjseJ9wF8392HGP+BQGYEbjbVlwy31K16GZ5mzhaD3CzLhxrviQqZIWko02hyrH8fs5dllLKUhOh6vk6zjh7hvVlp0ZfhDACfFtloCVZIE8+iKm9ifQUxarLlsSEjm8O1rvZO/niRexUSvXiKzohPAW8AL3gzrJGSUIVKFEAKhlOX9ETjjrigcMUF/ioNKaTpDBQkvXh80tzNF0=) 2025-08-29 14:26:54.372489 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ6FES162/82AYArOw4kR83+WhkuRo2k9A6NR01703anOOUKgcM5bP3bL0AqVXA/okOqa3V/BaMW7h95nBpT/7Q=) 2025-08-29 14:26:54.372506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICsP/INjYSJbHmKCTivFaNIoMpOkXhZpDQlPBNE1ZRXs) 2025-08-29 14:26:54.372517 | orchestrator | 2025-08-29 14:26:54.372528 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:26:54.372539 | orchestrator | Friday 29 August 2025 14:26:53 +0000 (0:00:01.126) 0:00:11.762 ********* 2025-08-29 14:26:54.372564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXsGqlnXsygKkgistmJ/MPPZKRF1hSZepH0HrH9kO3bYcm1P2uSoytG6mEopd8x1PtoxN23TsUuM420nf10B/tWYzdCCvb2ZyfnLMyPFwP6DzJ/aw6vibtkGfNQwNvl9F7YymzcGKNDozwNUbi+i/oRj/tD3IduUu1rBy6K6nrBykIKK+bVF47JkL63JWPx+BTL7lLnYOdbyiS7WcAY05c5g01+X5hJcuAljfaHjEKF1QqGByd7SuPgdyFM2ad3lFfX+n+DQc+0hdN5mfAoCaJXErRpTaVeCfOQ6Cxpigzkykj3JnMWHr1hdBVP2pTRSKiiGsc10cXli6s003FH7ms7W8JmtHzQh/z88wdRyi1CitS+/vsaU4KvPuqgbLoZJx3HqKsqgi0s8lzwNwnuX8P7YZLh0XCzsziRFCtbpsLZ2ItE0UlSO/2jdjCSFH1Me34pjYEGzuGh1nRmQ9oKccpG0/ir6m0jw1oEEWszDLGsWMOmjjmvwyxnPb5Q25PqkE=) 2025-08-29 14:27:05.427267 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMaKPZIdnudEbuzMhTYWgx1f6gSC4YZiUlSanSPjiZDE0wRIPZPh/15nJrh5m9HxUpr0S8b0ArBoddWBqV5+QB0=) 2025-08-29 14:27:05.427420 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP0nLTNbvd9e8zYaR5QmnihSqSoUJNYIt2umnankBfSg) 2025-08-29 14:27:05.427438 | orchestrator | 2025-08-29 14:27:05.427453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:05.427467 | orchestrator | Friday 29 August 2025 14:26:54 +0000 (0:00:01.110) 0:00:12.872 ********* 2025-08-29 14:27:05.427479 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqOvxZ5Ct2i8VlOXgyGOoZBTuLJZi+9sBJIlt0Cpm++zi0KmPkEpPYHy+Rb5q6UrMMltoClq4EGbojYxk7U5Hg=) 2025-08-29 14:27:05.427493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCX5gFV4+nQz8Vy7WYKH1tVpSGpT/p71NywKiq0CIa9g2cGgHYYzZ+kErgU8Fy2NrixzUp6P3AEyM7z3ebDgCo/ITyickf5ZSomJWpUjDrGgIBDyPmzbcMC4YS820Hjhp2ulIYO0NJ7SLFLV5AMLUbRUq85L3dvikJ0w/VgpbrtDsE6kuwEG5SMFew3vi8GXU89IGvbuUNJnCHTANRk3Y7Oj01n5f+GXUkqH6ORFeeO8cX33VKTgr13bsEb3oZws2wqEjTONxP+5KQ/z98e9t7i4YWpjWxMNUhXSox6Xfq7M+nWbN7yLQWax72mp/kJykPRArG8AveF2oAsR6hS3LpxGQDk3NSsEx+aw5qsaogXCohWPNb1wjedvZ1dfJVJxiEkkxz9kvu+24TRq6vYMVeHzcoj/OeNCQSv5zEQ1T4JbFiLDz9vNTRzGZjUsJm2Y8VDZEeMcaRIM68AuY3ljRH+/GO4EkVtqo9WBdHo+J2uPWnirKs5SFNQo1YMZxOHifM=) 2025-08-29 14:27:05.427507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJhawWLDKwFiWnyNdmkwqIPwYJ2fgilIAdhtqDfyjkm) 2025-08-29 14:27:05.427518 | orchestrator | 2025-08-29 14:27:05.427530 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 14:27:05.427542 | orchestrator | Friday 29 August 2025 14:26:55 +0000 (0:00:01.123) 0:00:13.996 ********* 2025-08-29 14:27:05.427554 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:27:05.427565 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 14:27:05.427576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:27:05.427587 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:27:05.427598 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:27:05.427640 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:27:05.427651 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:27:05.427662 | orchestrator | 2025-08-29 14:27:05.427674 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 14:27:05.427686 | orchestrator | Friday 29 August 2025 14:27:00 +0000 (0:00:05.313) 0:00:19.310 ********* 2025-08-29 14:27:05.427700 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:27:05.427713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:27:05.427724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:27:05.427735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:27:05.427746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:27:05.427757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:27:05.427770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:27:05.427783 | orchestrator | 2025-08-29 14:27:05.427795 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:05.427808 | orchestrator | Friday 29 August 2025 14:27:00 +0000 (0:00:00.178) 0:00:19.488 ********* 2025-08-29 14:27:05.427822 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIKAa2ibto8pyvdiJc6xWNa7p6moZ1AApsQq+BtQyj8mH) 2025-08-29 14:27:05.427911 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAItf+eAqEl7j8PWVW25A26J/SbOjvKhtAjmGM6m9f5DGhXXmeMyBLV4u1Rash6Jylqlwa6WQ5WIWv6/eM43HyGi2Y0DTQSX4HARe9H1Gohmq5F4H7bSehGVuXoE6djVJ5K6mMScOW9J5tR7N+y9r+2q1htpnjvorVwLBBrmYBIlKRQD8thMMJk9TXCiwK5w6TLc4kJfklec3xbTlPb0Ry3NYjYn6PMYtHBSJRVzxO48Ved3JmxiW3hzMj2KkuGG9vdSh97zKWmgLCCVe3d7BbpFwaQmhqvhDEYiQQgYrV31FSGSrSP5W6mN7HS4UG7rY330Mb3qf4Qiwqutvo+TEjUy5GaATehm0TMR4apONIh3ZhF003moiywvvS3XJfmUZstF0gINmqcKCigksV5zaYBSv40mwMDE9DfcF/zBPCPz9M/nwHZOubgRxDMPq6eu0EfeF8Qyjr4qQAnA4VnNbS7IRnzdbMvSaaoj/ANQvmGiSuyUAs+yHsYR/Q28MA7AM=) 2025-08-29 14:27:05.427927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDYXQlssLrspiY2GnAnU4n3rjbk0JK+y2n6Ro8yXSUUokk0jPpChQrGd7D01BLIMij/lca8cpxMYoUdFBki4hi0=) 2025-08-29 14:27:05.427940 | orchestrator | 2025-08-29 14:27:05.427953 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:05.427966 | orchestrator | Friday 29 August 2025 14:27:02 +0000 (0:00:01.131) 0:00:20.620 ********* 2025-08-29 14:27:05.427980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ7U5Et4sQJ3CUFGbuUSzaoXPrqLpSpka3iyCbpyICif2AulRLHJXQis9glqiWfMdm6eegMFeSU2Mo/4NVpwHtbeNQaKRjqnqJqoloS5q5hxKDzL8W54EDXvHzP5ANjVdASdXfgKS8/CIPKaSTeDhB9KnHrKixMjaJU6ZuHlNpUWRvSy+0rgtY6OjIEDZgGnskeav1FyQjLeXblA8MNu/i9Iq6GKFxeoTKU29WsILhgjOfC9NLxZ/Y0Ll2iKG9tIPFjwJgMXqnVIPc+ndUEt/i+D5zrEgAtNhvhV8wOaMeW5U7jmQSsYq0ElWkLxnVG23Bev4CkDDT/+ND40Prdq23zST76RHNn2jPrp61PuKbpkN3JdHTv6VsZb64EsZyQ9qR9QrO0+L/qaG2wD09SZdQXEgZiK6E7/7m0uAeV5DRwiFexR+9ekcgrXELVysXaA3mvkLmJZwXh4466OD9mAyJA4+myrvbFL51UxdRyoDU18gnYtRYi7nAmvSP5X8Octc=) 2025-08-29 14:27:05.428003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPQpovHB3ZWP9MSoobEFuO3+cOkSiTJ9pDWcI7kQ3Sv0dqdP/FNsQpua7Q4xvGvygVy4rgg/hiScp0xeV192sYA=) 2025-08-29 14:27:05.428016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDjgXhh9UP/PkSsn9345cVtzKGkM5GlMU+jfg5M6E+nE) 2025-08-29 14:27:05.428027 | orchestrator | 2025-08-29 14:27:05.428038 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:05.428049 | orchestrator | Friday 29 August 2025 14:27:03 +0000 (0:00:01.084) 0:00:21.704 ********* 2025-08-29 14:27:05.428060 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMtuVkErADwtHMZ8GzcaYgQhMEJdI2bE5uzRLaFoZF/XWiEKseqG5U5V9H1OLIbeePDgw9WmSjgWErWE5DMBeQA=) 2025-08-29 14:27:05.428072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHjpfE90/SKOquiiyMtvkgERvMG0RONnHgDDakrgh+3lN2o/7iBGq1dsXPWwWVAo8xEY/VCGELdldkuGXhmSiqniAZ+O4AgNxINfzA4r9J6C/x+suwcuu9FbVV5cYSPmnLKnteSjitSJ149RU1+Ye6y4j/sTZ1YVbK1SdM19/alx57FVAa6UQnA3MXrpj28ELJ7ISg/oZeqeHdEPhRGcdXrfgG40STs1f1Q1K04bO4/7REVg5hO03zdTezoWXGRbMt7RLE4lnULQxKZmcMgwxWVh3QrSSN+Mqt866HE/ZbwbGB5haV/MiTbSk9qdRQBl8kafzTI1dz3y3UZleqYTjxxAqvFRi3LMbFDWTPGrx5drW/RiDAtwhQ3RzeiAckUkUjNZL7clj0f7JBf0PutGKeV3Lr0DAFA5QTuh8lStGrS4oj+cUVdDN6lNIhfbeo7V6RhVlHK097Wu+ek8IoEcUj6rMvXJcKIRPwt+0CKPR/Cwivao0ZD9Eb4eJHGcxpWxc=) 2025-08-29 14:27:05.428083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHyrK7l/KXfCzwKgDJGDc8CneyR7eWcCAJrXlvz0kGEE) 2025-08-29 14:27:05.428094 | orchestrator | 2025-08-29 14:27:05.428105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:05.428116 | orchestrator | Friday 29 August 2025 14:27:04 +0000 (0:00:01.107) 0:00:22.812 ********* 
2025-08-29 14:27:05.428126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCAv2zvDXE3+hhtKaqy2Uj8r38bSK47OwoRKgKEdcCx8meE1Y90Olti4zjrhxudCbTQMGxUFcwGqw3yrx/o8C0Y=) 2025-08-29 14:27:05.428137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN5udjVUhWxRayYthGYPR6oOwv8oB3MCmu8WGdr/QEKU) 2025-08-29 14:27:05.428156 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQVFEIwq4h5wKkvVAf5i3arYhCERI6hpyWKiJlET3LOfmb2sYiMGBo2YbQGr/fEx0NkTGchQroa+4YpOT7O564oZNZrF0MDwk9CRvUId8O8QQp1cxZeG3x6ybhjVDW660MNrm2QrF3hFhFVx/N8Nftt9SBMH2HprMq6NHl9volQH4w3n2pW2uintackZNxEvIiktseUiCTbYsYZTPX2qX/Dq6cE3t91CF1edTGxShA3cQSBv3e8g3pksvjLmHY47z0eXDMU8yiHOlcig1BT9jEZUtBZKaeGJxZ4OdySzzTbJxDI8AOAHxloH5t+QO98h46nV/SYKcLD5F6wJeMa+fpLo6sB54Ka6EO391A7eD2/KPnJWikmOSdfnJt4C8iiOGqYW2X9R8CLJiWJe7k9txjdJO1/CfnjEBPywCNp4Pa6g0rGM/fFM0zqWzLA7kzAZj/0FPJhVjT07Ln3uAn6PV7R2GEtiUnKxE5Vp5J8BwKh5MQZkDu4GNp4wrX5H6OQ0k=) 2025-08-29 14:27:09.953557 | orchestrator | 2025-08-29 14:27:09.953687 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:09.953704 | orchestrator | Friday 29 August 2025 14:27:05 +0000 (0:00:01.117) 0:00:23.929 ********* 2025-08-29 14:27:09.953718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ6FES162/82AYArOw4kR83+WhkuRo2k9A6NR01703anOOUKgcM5bP3bL0AqVXA/okOqa3V/BaMW7h95nBpT/7Q=) 2025-08-29 14:27:09.953735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDec99D61bDO505S5miUb2gBo8dnzcOgvbcbI4e0XVKUHQuoBDnpmPoHlTUEvph/4qoj86OcD92t73cGcSMIijx32w8n4VYwhvfTNAAaVRgveDNsLMOPtmrQ85Wn2OIuAA8syqivvrkTwD+3o5JVC/vDU9uApp4RxkViQaIy3q8xdyowXmalIVX3qMoPIynk/zd92YY3gzFCQ5Tz8iArSrEfk9yaGZhqvq2exUqC92ZVW+lkPvilsNwn5wzw5JzTYD75Kdy4ba94A7crQBMa4MKQTHdTQScDC5sFDWANj6h3oUpNyqMBjseJ9wF8392HGP+BQGYEbjbVlwy31K16GZ5mzhaD3CzLhxrviQqZIWko02hyrH8fs5dllLKUhOh6vk6zjh7hvVlp0ZfhDACfFtloCVZIE8+iKm9ifQUxarLlsSEjm8O1rvZO/niRexUSvXiKzohPAW8AL3gzrJGSUIVKFEAKhlOX9ETjjrigcMUF/ioNKaTpDBQkvXh80tzNF0=) 2025-08-29 14:27:09.953784 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICsP/INjYSJbHmKCTivFaNIoMpOkXhZpDQlPBNE1ZRXs) 2025-08-29 14:27:09.953798 | orchestrator | 2025-08-29 14:27:09.953810 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:09.953821 | orchestrator | Friday 29 August 2025 14:27:06 +0000 (0:00:01.132) 0:00:25.061 ********* 2025-08-29 14:27:09.953832 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP0nLTNbvd9e8zYaR5QmnihSqSoUJNYIt2umnankBfSg) 2025-08-29 14:27:09.953900 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXsGqlnXsygKkgistmJ/MPPZKRF1hSZepH0HrH9kO3bYcm1P2uSoytG6mEopd8x1PtoxN23TsUuM420nf10B/tWYzdCCvb2ZyfnLMyPFwP6DzJ/aw6vibtkGfNQwNvl9F7YymzcGKNDozwNUbi+i/oRj/tD3IduUu1rBy6K6nrBykIKK+bVF47JkL63JWPx+BTL7lLnYOdbyiS7WcAY05c5g01+X5hJcuAljfaHjEKF1QqGByd7SuPgdyFM2ad3lFfX+n+DQc+0hdN5mfAoCaJXErRpTaVeCfOQ6Cxpigzkykj3JnMWHr1hdBVP2pTRSKiiGsc10cXli6s003FH7ms7W8JmtHzQh/z88wdRyi1CitS+/vsaU4KvPuqgbLoZJx3HqKsqgi0s8lzwNwnuX8P7YZLh0XCzsziRFCtbpsLZ2ItE0UlSO/2jdjCSFH1Me34pjYEGzuGh1nRmQ9oKccpG0/ir6m0jw1oEEWszDLGsWMOmjjmvwyxnPb5Q25PqkE=) 2025-08-29 14:27:09.953935 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMaKPZIdnudEbuzMhTYWgx1f6gSC4YZiUlSanSPjiZDE0wRIPZPh/15nJrh5m9HxUpr0S8b0ArBoddWBqV5+QB0=) 2025-08-29 14:27:09.953946 | orchestrator | 2025-08-29 14:27:09.953958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:09.953969 | orchestrator | Friday 29 August 2025 14:27:07 +0000 (0:00:01.120) 0:00:26.182 ********* 2025-08-29 14:27:09.953980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX5gFV4+nQz8Vy7WYKH1tVpSGpT/p71NywKiq0CIa9g2cGgHYYzZ+kErgU8Fy2NrixzUp6P3AEyM7z3ebDgCo/ITyickf5ZSomJWpUjDrGgIBDyPmzbcMC4YS820Hjhp2ulIYO0NJ7SLFLV5AMLUbRUq85L3dvikJ0w/VgpbrtDsE6kuwEG5SMFew3vi8GXU89IGvbuUNJnCHTANRk3Y7Oj01n5f+GXUkqH6ORFeeO8cX33VKTgr13bsEb3oZws2wqEjTONxP+5KQ/z98e9t7i4YWpjWxMNUhXSox6Xfq7M+nWbN7yLQWax72mp/kJykPRArG8AveF2oAsR6hS3LpxGQDk3NSsEx+aw5qsaogXCohWPNb1wjedvZ1dfJVJxiEkkxz9kvu+24TRq6vYMVeHzcoj/OeNCQSv5zEQ1T4JbFiLDz9vNTRzGZjUsJm2Y8VDZEeMcaRIM68AuY3ljRH+/GO4EkVtqo9WBdHo+J2uPWnirKs5SFNQo1YMZxOHifM=) 2025-08-29 14:27:09.953992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIqOvxZ5Ct2i8VlOXgyGOoZBTuLJZi+9sBJIlt0Cpm++zi0KmPkEpPYHy+Rb5q6UrMMltoClq4EGbojYxk7U5Hg=) 2025-08-29 14:27:09.954003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINJhawWLDKwFiWnyNdmkwqIPwYJ2fgilIAdhtqDfyjkm) 2025-08-29 14:27:09.954014 | orchestrator | 2025-08-29 14:27:09.954083 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 14:27:09.954096 | orchestrator | Friday 29 August 2025 14:27:08 +0000 (0:00:01.132) 0:00:27.314 ********* 2025-08-29 14:27:09.954109 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 14:27:09.954123 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  
2025-08-29 14:27:09.954135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 14:27:09.954156 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 14:27:09.954168 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 14:27:09.954181 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 14:27:09.954193 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 14:27:09.954206 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:09.954218 | orchestrator | 2025-08-29 14:27:09.954249 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-08-29 14:27:09.954271 | orchestrator | Friday 29 August 2025 14:27:08 +0000 (0:00:00.182) 0:00:27.496 ********* 2025-08-29 14:27:09.954285 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:09.954298 | orchestrator | 2025-08-29 14:27:09.954310 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 14:27:09.954322 | orchestrator | Friday 29 August 2025 14:27:09 +0000 (0:00:00.070) 0:00:27.566 ********* 2025-08-29 14:27:09.954335 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:09.954347 | orchestrator | 2025-08-29 14:27:09.954359 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 14:27:09.954371 | orchestrator | Friday 29 August 2025 14:27:09 +0000 (0:00:00.063) 0:00:27.630 ********* 2025-08-29 14:27:09.954383 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:09.954396 | orchestrator | 2025-08-29 14:27:09.954407 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:27:09.954418 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:27:09.954431 | orchestrator | 2025-08-29 
14:27:09.954442 | orchestrator | 2025-08-29 14:27:09.954453 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:27:09.954463 | orchestrator | Friday 29 August 2025 14:27:09 +0000 (0:00:00.552) 0:00:28.182 ********* 2025-08-29 14:27:09.954474 | orchestrator | =============================================================================== 2025-08-29 14:27:09.954485 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.76s 2025-08-29 14:27:09.954495 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.31s 2025-08-29 14:27:09.954508 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-08-29 14:27:09.954519 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-08-29 14:27:09.954530 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-08-29 14:27:09.954541 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-08-29 14:27:09.954551 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-08-29 14:27:09.954562 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-08-29 14:27:09.954573 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:27:09.954584 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:27:09.954594 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:27:09.954605 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:27:09.954616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries ----------- 1.11s 2025-08-29 14:27:09.954626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:27:09.954637 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-08-29 14:27:09.954648 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-08-29 14:27:09.954674 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.55s 2025-08-29 14:27:09.954685 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-08-29 14:27:09.954696 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-08-29 14:27:09.954707 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-08-29 14:27:10.256252 | orchestrator | + osism apply squid 2025-08-29 14:27:22.231733 | orchestrator | 2025-08-29 14:27:22 | INFO  | Task 1d219530-b38c-4af9-9c39-21eea1ace34a (squid) was prepared for execution. 2025-08-29 14:27:22.231947 | orchestrator | 2025-08-29 14:27:22 | INFO  | It takes a moment until task 1d219530-b38c-4af9-9c39-21eea1ace34a (squid) has been started and output is visible here. 
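The known_hosts play above scans each node's host keys and writes them for the manager before any further automation connects over SSH. A minimal sketch of what the "Write scanned known_hosts entries" and "Set file permissions" tasks amount to — the hostname, key material, and file path here are illustrative stand-ins, not taken from the run:

```shell
#!/bin/sh
# Sketch only: append a scanned host key entry to a known_hosts file,
# then set permissions, mirroring the role's final "Set file permissions" task.
KNOWN_HOSTS=known_hosts_demo   # stand-in for the operator's ~/.ssh/known_hosts
: > "$KNOWN_HOSTS"
# A real run would gather entries with something like:
#   ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0
printf '%s\n' 'testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5EXAMPLEKEY' >> "$KNOWN_HOSTS"
chmod 0644 "$KNOWN_HOSTS"
```

Pre-populating known_hosts this way lets later Ansible runs use strict host key checking without interactive confirmation prompts.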
2025-08-29 14:29:17.870135 | orchestrator | 2025-08-29 14:29:17.870265 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 14:29:17.870285 | orchestrator | 2025-08-29 14:29:17.870298 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 14:29:17.870310 | orchestrator | Friday 29 August 2025 14:27:26 +0000 (0:00:00.171) 0:00:00.172 ********* 2025-08-29 14:29:17.870321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:29:17.870334 | orchestrator | 2025-08-29 14:29:17.870345 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 14:29:17.870356 | orchestrator | Friday 29 August 2025 14:27:26 +0000 (0:00:00.097) 0:00:00.269 ********* 2025-08-29 14:29:17.870367 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:17.870379 | orchestrator | 2025-08-29 14:29:17.870390 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 14:29:17.870401 | orchestrator | Friday 29 August 2025 14:27:28 +0000 (0:00:01.676) 0:00:01.945 ********* 2025-08-29 14:29:17.870413 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 14:29:17.870423 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 14:29:17.870434 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 14:29:17.870445 | orchestrator | 2025-08-29 14:29:17.870456 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 14:29:17.870467 | orchestrator | Friday 29 August 2025 14:27:29 +0000 (0:00:01.175) 0:00:03.121 ********* 2025-08-29 14:29:17.870478 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 14:29:17.870489 | 
orchestrator | 2025-08-29 14:29:17.870500 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 14:29:17.870510 | orchestrator | Friday 29 August 2025 14:27:30 +0000 (0:00:01.121) 0:00:04.242 ********* 2025-08-29 14:29:17.870521 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:17.870532 | orchestrator | 2025-08-29 14:29:17.870543 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-08-29 14:29:17.870554 | orchestrator | Friday 29 August 2025 14:27:30 +0000 (0:00:00.368) 0:00:04.610 ********* 2025-08-29 14:29:17.870564 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:17.870575 | orchestrator | 2025-08-29 14:29:17.870609 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 14:29:17.870623 | orchestrator | Friday 29 August 2025 14:27:31 +0000 (0:00:00.951) 0:00:05.562 ********* 2025-08-29 14:29:17.870635 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-08-29 14:29:17.870648 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:17.870661 | orchestrator | 2025-08-29 14:29:17.870674 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 14:29:17.870686 | orchestrator | Friday 29 August 2025 14:28:04 +0000 (0:00:32.947) 0:00:38.510 ********* 2025-08-29 14:29:17.870698 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:17.870711 | orchestrator | 2025-08-29 14:29:17.870723 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 14:29:17.870736 | orchestrator | Friday 29 August 2025 14:28:16 +0000 (0:00:12.134) 0:00:50.644 ********* 2025-08-29 14:29:17.870748 | orchestrator | Pausing for 60 seconds 2025-08-29 14:29:17.870760 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:17.870795 | orchestrator | 2025-08-29 14:29:17.870808 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 14:29:17.870820 | orchestrator | Friday 29 August 2025 14:29:16 +0000 (0:01:00.081) 0:01:50.726 ********* 2025-08-29 14:29:17.870833 | orchestrator | ok: [testbed-manager] 2025-08-29 14:29:17.870844 | orchestrator | 2025-08-29 14:29:17.870857 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 14:29:17.870895 | orchestrator | Friday 29 August 2025 14:29:16 +0000 (0:00:00.073) 0:01:50.799 ********* 2025-08-29 14:29:17.870908 | orchestrator | changed: [testbed-manager] 2025-08-29 14:29:17.870920 | orchestrator | 2025-08-29 14:29:17.870933 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:29:17.870945 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:29:17.870956 | orchestrator | 2025-08-29 14:29:17.870967 | orchestrator | 2025-08-29 14:29:17.870978 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-08-29 14:29:17.870989 | orchestrator | Friday 29 August 2025 14:29:17 +0000 (0:00:00.678) 0:01:51.478 ********* 2025-08-29 14:29:17.870999 | orchestrator | =============================================================================== 2025-08-29 14:29:17.871010 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-08-29 14:29:17.871021 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.95s 2025-08-29 14:29:17.871032 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.13s 2025-08-29 14:29:17.871042 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.68s 2025-08-29 14:29:17.871053 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2025-08-29 14:29:17.871064 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-08-29 14:29:17.871075 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-08-29 14:29:17.871086 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2025-08-29 14:29:17.871096 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-08-29 14:29:17.871107 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-08-29 14:29:17.871118 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-08-29 14:29:18.166974 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 14:29:18.167066 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-08-29 14:29:18.172178 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-08-29 14:29:18.240544 | orchestrator | + [[ 1 -lt 0 ]] 2025-08-29 14:29:18.241081 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 14:29:30.124973 | orchestrator | 2025-08-29 14:29:30 | INFO  | Task 153c98e0-00e4-43e5-a826-a41b609f2384 (operator) was prepared for execution. 2025-08-29 14:29:30.125094 | orchestrator | 2025-08-29 14:29:30 | INFO  | It takes a moment until task 153c98e0-00e4-43e5-a826-a41b609f2384 (operator) has been started and output is visible here. 2025-08-29 14:29:46.410175 | orchestrator | 2025-08-29 14:29:46.410297 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 14:29:46.410316 | orchestrator | 2025-08-29 14:29:46.410328 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:29:46.410340 | orchestrator | Friday 29 August 2025 14:29:34 +0000 (0:00:00.154) 0:00:00.155 ********* 2025-08-29 14:29:46.410367 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:46.410380 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:46.410391 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:46.410402 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:46.410412 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:46.410423 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:46.410434 | orchestrator | 2025-08-29 14:29:46.410445 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 14:29:46.410456 | orchestrator | Friday 29 August 2025 14:29:37 +0000 (0:00:03.693) 0:00:03.848 ********* 2025-08-29 14:29:46.410467 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:46.410478 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:46.410488 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:46.410499 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:46.410534 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:46.410545 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:46.410556 | orchestrator | 2025-08-29 14:29:46.410566 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 14:29:46.410577 | orchestrator | 2025-08-29 14:29:46.410588 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:29:46.410601 | orchestrator | Friday 29 August 2025 14:29:38 +0000 (0:00:00.778) 0:00:04.626 ********* 2025-08-29 14:29:46.410614 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:46.410625 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:46.410637 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:46.410650 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:46.410662 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:46.410673 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:46.410686 | orchestrator | 2025-08-29 14:29:46.410698 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:29:46.410710 | orchestrator | Friday 29 August 2025 14:29:38 +0000 (0:00:00.180) 0:00:04.806 ********* 2025-08-29 14:29:46.410721 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:29:46.410732 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:29:46.410742 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:29:46.410774 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:29:46.410785 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:29:46.410796 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:29:46.410806 | orchestrator | 2025-08-29 14:29:46.410817 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:29:46.410828 | orchestrator | Friday 29 August 2025 14:29:39 +0000 (0:00:00.184) 0:00:04.991 ********* 2025-08-29 14:29:46.410839 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:46.410851 | orchestrator | changed: [testbed-node-3] 2025-08-29 
14:29:46.410862 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:46.410872 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:46.410883 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:46.410893 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:46.410904 | orchestrator | 2025-08-29 14:29:46.410915 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:29:46.410926 | orchestrator | Friday 29 August 2025 14:29:39 +0000 (0:00:00.657) 0:00:05.649 ********* 2025-08-29 14:29:46.410936 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:46.410947 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:46.410958 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:46.410968 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:46.410979 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:46.410989 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:46.411000 | orchestrator | 2025-08-29 14:29:46.411011 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:29:46.411021 | orchestrator | Friday 29 August 2025 14:29:40 +0000 (0:00:00.764) 0:00:06.413 ********* 2025-08-29 14:29:46.411032 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 14:29:46.411043 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 14:29:46.411054 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 14:29:46.411065 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 14:29:46.411075 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 14:29:46.411086 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 14:29:46.411097 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 14:29:46.411107 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 14:29:46.411118 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-08-29 14:29:46.411128 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 14:29:46.411139 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 14:29:46.411149 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 14:29:46.411161 | orchestrator | 2025-08-29 14:29:46.411172 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:29:46.411191 | orchestrator | Friday 29 August 2025 14:29:41 +0000 (0:00:01.216) 0:00:07.630 ********* 2025-08-29 14:29:46.411202 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:46.411213 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:46.411228 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:46.411239 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:46.411249 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:46.411260 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:46.411270 | orchestrator | 2025-08-29 14:29:46.411281 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:29:46.411293 | orchestrator | Friday 29 August 2025 14:29:42 +0000 (0:00:01.316) 0:00:08.946 ********* 2025-08-29 14:29:46.411304 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 14:29:46.411314 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-08-29 14:29:46.411325 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 14:29:46.411336 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411365 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411376 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411387 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411398 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411409 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:29:46.411428 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411439 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411450 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411461 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411472 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411482 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 14:29:46.411493 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411503 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411514 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411525 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411535 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411546 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:29:46.411556 | 
orchestrator | 2025-08-29 14:29:46.411567 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:29:46.411579 | orchestrator | Friday 29 August 2025 14:29:44 +0000 (0:00:01.279) 0:00:10.226 ********* 2025-08-29 14:29:46.411590 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:46.411601 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:46.411611 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:46.411622 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:46.411632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:46.411643 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:46.411654 | orchestrator | 2025-08-29 14:29:46.411664 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:29:46.411675 | orchestrator | Friday 29 August 2025 14:29:44 +0000 (0:00:00.180) 0:00:10.406 ********* 2025-08-29 14:29:46.411686 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:46.411697 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:46.411707 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:46.411718 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:46.411735 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:46.411746 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:46.411773 | orchestrator | 2025-08-29 14:29:46.411784 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:29:46.411795 | orchestrator | Friday 29 August 2025 14:29:45 +0000 (0:00:00.583) 0:00:10.990 ********* 2025-08-29 14:29:46.411806 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:46.411817 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:46.411827 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:46.411838 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:29:46.411848 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:46.411859 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:46.411869 | orchestrator | 2025-08-29 14:29:46.411880 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:29:46.411891 | orchestrator | Friday 29 August 2025 14:29:45 +0000 (0:00:00.164) 0:00:11.154 ********* 2025-08-29 14:29:46.411902 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 14:29:46.411913 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 14:29:46.411923 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 14:29:46.411934 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 14:29:46.411945 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 14:29:46.411956 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:46.411967 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:46.411977 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:46.411988 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:46.411999 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:29:46.412009 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 14:29:46.412020 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:46.412031 | orchestrator | 2025-08-29 14:29:46.412042 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:29:46.412052 | orchestrator | Friday 29 August 2025 14:29:45 +0000 (0:00:00.744) 0:00:11.899 ********* 2025-08-29 14:29:46.412063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:46.412073 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:46.412084 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:46.412095 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:46.412105 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
14:29:46.412116 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:46.412127 | orchestrator | 2025-08-29 14:29:46.412138 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:29:46.412149 | orchestrator | Friday 29 August 2025 14:29:46 +0000 (0:00:00.163) 0:00:12.062 ********* 2025-08-29 14:29:46.412159 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:46.412170 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:46.412181 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:46.412191 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:46.412202 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:46.412213 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:46.412223 | orchestrator | 2025-08-29 14:29:46.412234 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:29:46.412245 | orchestrator | Friday 29 August 2025 14:29:46 +0000 (0:00:00.157) 0:00:12.219 ********* 2025-08-29 14:29:46.412256 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:46.412267 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:46.412277 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:46.412288 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:46.412306 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:47.642471 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:47.642578 | orchestrator | 2025-08-29 14:29:47.642596 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:29:47.642609 | orchestrator | Friday 29 August 2025 14:29:46 +0000 (0:00:00.164) 0:00:12.384 ********* 2025-08-29 14:29:47.642647 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:29:47.642674 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:29:47.642687 | orchestrator | changed: [testbed-node-3] 2025-08-29 
14:29:47.642706 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:29:47.642724 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:29:47.642740 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:29:47.642841 | orchestrator | 2025-08-29 14:29:47.642859 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:29:47.642870 | orchestrator | Friday 29 August 2025 14:29:47 +0000 (0:00:00.688) 0:00:13.072 ********* 2025-08-29 14:29:47.642881 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:29:47.642892 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:29:47.642902 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:29:47.642912 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:29:47.642923 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:29:47.642933 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:29:47.642943 | orchestrator | 2025-08-29 14:29:47.642954 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:29:47.642966 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.642979 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.642992 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.643012 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.643030 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.643048 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:29:47.643066 | orchestrator | 2025-08-29 14:29:47.643086 | orchestrator | 2025-08-29 14:29:47.643108 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:29:47.643129 | orchestrator | Friday 29 August 2025 14:29:47 +0000 (0:00:00.229) 0:00:13.301 ********* 2025-08-29 14:29:47.643148 | orchestrator | =============================================================================== 2025-08-29 14:29:47.643167 | orchestrator | Gathering Facts --------------------------------------------------------- 3.69s 2025-08-29 14:29:47.643185 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s 2025-08-29 14:29:47.643205 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-08-29 14:29:47.643225 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2025-08-29 14:29:47.643242 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2025-08-29 14:29:47.643255 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2025-08-29 14:29:47.643266 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-08-29 14:29:47.643279 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-08-29 14:29:47.643291 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s 2025-08-29 14:29:47.643303 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2025-08-29 14:29:47.643315 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-08-29 14:29:47.643326 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-08-29 14:29:47.643353 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-08-29 14:29:47.643365 | orchestrator 
| osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2025-08-29 14:29:47.643378 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-08-29 14:29:47.643389 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-08-29 14:29:47.643399 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-08-29 14:29:47.643410 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-08-29 14:29:47.957591 | orchestrator | + osism apply --environment custom facts 2025-08-29 14:29:49.770335 | orchestrator | 2025-08-29 14:29:49 | INFO  | Trying to run play facts in environment custom 2025-08-29 14:29:59.880340 | orchestrator | 2025-08-29 14:29:59 | INFO  | Task da34179a-831d-471e-9718-742cce7cd8cf (facts) was prepared for execution. 2025-08-29 14:29:59.880453 | orchestrator | 2025-08-29 14:29:59 | INFO  | It takes a moment until task da34179a-831d-471e-9718-742cce7cd8cf (facts) has been started and output is visible here. 
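The `+ [[ 1 -lt 0 ]]` trace before `osism apply operator` above suggests the deploy script wraps each `osism apply` call in a retry counter. A minimal sketch of that pattern follows; the function and variable names are assumptions for illustration, not taken from the actual testbed scripts.

```shell
# Hypothetical sketch of the deploy script's retry logic, inferred from the
# "+ [[ 1 -lt 0 ]]" trace above. Names are assumptions, not the real script.
apply_with_retry() {
    local attempts=$1
    shift
    while true; do
        "$@" && return 0                    # command succeeded, stop retrying
        attempts=$((attempts - 1))
        [[ $attempts -lt 0 ]] && return 1   # counter exhausted, as in the trace
        echo "retrying: $*" >&2
    done
}

# Usage mirroring the commands logged in this job:
#   apply_with_retry 1 osism apply operator -u ubuntu -l testbed-nodes
#   apply_with_retry 1 osism apply --environment custom facts
```

With one retry budgeted, a transient failure of the first attempt is absorbed; a second failure propagates a non-zero exit status so the job step fails.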
2025-08-29 14:30:43.441785 | orchestrator | 2025-08-29 14:30:43.441910 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-08-29 14:30:43.441928 | orchestrator | 2025-08-29 14:30:43.441940 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 14:30:43.441952 | orchestrator | Friday 29 August 2025 14:30:03 +0000 (0:00:00.086) 0:00:00.086 ********* 2025-08-29 14:30:43.441963 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:43.441976 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:30:43.441988 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:30:43.441999 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.442010 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.442128 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:30:43.442149 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.442170 | orchestrator | 2025-08-29 14:30:43.442189 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-08-29 14:30:43.442208 | orchestrator | Friday 29 August 2025 14:30:05 +0000 (0:00:01.422) 0:00:01.509 ********* 2025-08-29 14:30:43.442220 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:43.442231 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:30:43.442242 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:30:43.442253 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:30:43.442265 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.442277 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.442289 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.442301 | orchestrator | 2025-08-29 14:30:43.442313 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-08-29 14:30:43.442326 | orchestrator | 2025-08-29 14:30:43.442338 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:30:43.442350 | orchestrator | Friday 29 August 2025 14:30:06 +0000 (0:00:01.202) 0:00:02.711 ********* 2025-08-29 14:30:43.442362 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.442374 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.442386 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.442398 | orchestrator | 2025-08-29 14:30:43.442410 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:30:43.442423 | orchestrator | Friday 29 August 2025 14:30:06 +0000 (0:00:00.106) 0:00:02.818 ********* 2025-08-29 14:30:43.442435 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.442447 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.442459 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.442474 | orchestrator | 2025-08-29 14:30:43.442493 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:30:43.442511 | orchestrator | Friday 29 August 2025 14:30:06 +0000 (0:00:00.216) 0:00:03.034 ********* 2025-08-29 14:30:43.442528 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.442575 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.442597 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.442616 | orchestrator | 2025-08-29 14:30:43.442635 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:30:43.442653 | orchestrator | Friday 29 August 2025 14:30:06 +0000 (0:00:00.217) 0:00:03.251 ********* 2025-08-29 14:30:43.442673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:30:43.442693 | orchestrator | 2025-08-29 14:30:43.442712 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-08-29 14:30:43.442760 | orchestrator | Friday 29 August 2025 14:30:07 +0000 (0:00:00.154) 0:00:03.406 ********* 2025-08-29 14:30:43.442774 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.442786 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.442806 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.442823 | orchestrator | 2025-08-29 14:30:43.442842 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:30:43.442862 | orchestrator | Friday 29 August 2025 14:30:07 +0000 (0:00:00.488) 0:00:03.894 ********* 2025-08-29 14:30:43.442881 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:30:43.442900 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:30:43.442918 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:30:43.442936 | orchestrator | 2025-08-29 14:30:43.442955 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:30:43.442974 | orchestrator | Friday 29 August 2025 14:30:07 +0000 (0:00:00.117) 0:00:04.012 ********* 2025-08-29 14:30:43.442993 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.443011 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.443025 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.443035 | orchestrator | 2025-08-29 14:30:43.443046 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:30:43.443057 | orchestrator | Friday 29 August 2025 14:30:08 +0000 (0:00:01.047) 0:00:05.059 ********* 2025-08-29 14:30:43.443067 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.443078 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.443089 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.443099 | orchestrator | 2025-08-29 14:30:43.443110 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 
14:30:43.443120 | orchestrator | Friday 29 August 2025 14:30:09 +0000 (0:00:00.460) 0:00:05.520 ********* 2025-08-29 14:30:43.443131 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.443141 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.443152 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.443162 | orchestrator | 2025-08-29 14:30:43.443173 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:30:43.443184 | orchestrator | Friday 29 August 2025 14:30:10 +0000 (0:00:01.057) 0:00:06.578 ********* 2025-08-29 14:30:43.443194 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.443205 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.443215 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.443226 | orchestrator | 2025-08-29 14:30:43.443236 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-08-29 14:30:43.443247 | orchestrator | Friday 29 August 2025 14:30:27 +0000 (0:00:16.936) 0:00:23.517 ********* 2025-08-29 14:30:43.443257 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:30:43.443268 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:30:43.443279 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:30:43.443289 | orchestrator | 2025-08-29 14:30:43.443300 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-08-29 14:30:43.443332 | orchestrator | Friday 29 August 2025 14:30:27 +0000 (0:00:00.115) 0:00:23.632 ********* 2025-08-29 14:30:43.443344 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:43.443354 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:43.443376 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:43.443387 | orchestrator | 2025-08-29 14:30:43.443398 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 
14:30:43.443417 | orchestrator | Friday 29 August 2025 14:30:34 +0000 (0:00:07.251) 0:00:30.884 ********* 2025-08-29 14:30:43.443428 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.443439 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.443449 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.443460 | orchestrator | 2025-08-29 14:30:43.443471 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 14:30:43.443482 | orchestrator | Friday 29 August 2025 14:30:35 +0000 (0:00:00.445) 0:00:31.329 ********* 2025-08-29 14:30:43.443492 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-08-29 14:30:43.443503 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-08-29 14:30:43.443514 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-08-29 14:30:43.443524 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-08-29 14:30:43.443535 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-08-29 14:30:43.443546 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-08-29 14:30:43.443556 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-08-29 14:30:43.443567 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-08-29 14:30:43.443577 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-08-29 14:30:43.443588 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:30:43.443599 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:30:43.443609 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-08-29 14:30:43.443620 | orchestrator | 2025-08-29 14:30:43.443631 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-08-29 14:30:43.443642 | orchestrator | Friday 29 August 2025 14:30:38 +0000 (0:00:03.333) 0:00:34.663 ********* 2025-08-29 14:30:43.443652 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.443663 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.443673 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.443684 | orchestrator | 2025-08-29 14:30:43.443694 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:30:43.443705 | orchestrator | 2025-08-29 14:30:43.443716 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:30:43.443758 | orchestrator | Friday 29 August 2025 14:30:39 +0000 (0:00:01.158) 0:00:35.822 ********* 2025-08-29 14:30:43.443770 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:30:43.443780 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:30:43.443791 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:30:43.443801 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:43.443812 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:43.443822 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:43.443833 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:43.443843 | orchestrator | 2025-08-29 14:30:43.443854 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:30:43.443865 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:30:43.443877 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:30:43.443890 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:30:43.443901 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:30:43.443912 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:30:43.443930 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:30:43.443940 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:30:43.443951 | orchestrator | 2025-08-29 14:30:43.443962 | orchestrator | 2025-08-29 14:30:43.443973 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:30:43.443983 | orchestrator | Friday 29 August 2025 14:30:43 +0000 (0:00:03.924) 0:00:39.746 ********* 2025-08-29 14:30:43.443994 | orchestrator | =============================================================================== 2025-08-29 14:30:43.444004 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.94s 2025-08-29 14:30:43.444015 | orchestrator | Install required packages (Debian) -------------------------------------- 7.25s 2025-08-29 14:30:43.444026 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.92s 2025-08-29 14:30:43.444036 | orchestrator | Copy fact files --------------------------------------------------------- 3.33s 2025-08-29 14:30:43.444047 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2025-08-29 14:30:43.444058 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s 2025-08-29 14:30:43.444078 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.16s 2025-08-29 14:30:43.669124 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2025-08-29 14:30:43.669217 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-08-29 14:30:43.669229 | orchestrator | osism.commons.repository : Create 
/etc/apt/sources.list.d directory ----- 0.49s 2025-08-29 14:30:43.669239 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s 2025-08-29 14:30:43.669248 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2025-08-29 14:30:43.669258 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2025-08-29 14:30:43.669267 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s 2025-08-29 14:30:43.669277 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-08-29 14:30:43.669287 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-08-29 14:30:43.669297 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-08-29 14:30:43.669306 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-08-29 14:30:44.005935 | orchestrator | + osism apply bootstrap 2025-08-29 14:30:56.019074 | orchestrator | 2025-08-29 14:30:56 | INFO  | Task ab2a7618-6dd9-4b02-9cf9-cb69319855e2 (bootstrap) was prepared for execution. 2025-08-29 14:30:56.019190 | orchestrator | 2025-08-29 14:30:56 | INFO  | It takes a moment until task ab2a7618-6dd9-4b02-9cf9-cb69319855e2 (bootstrap) has been started and output is visible here. 
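Each play in this job ends with a PLAY RECAP listing `ok=`, `failed=`, and `unreachable=` counters per host. A job gating on those counters might use a check like the following sketch; the helper name and log path are hypothetical, while the counter fields are standard Ansible recap output.

```shell
# Hedged sketch: fail if any host line in a captured PLAY RECAP reports
# failed or unreachable hosts. "check_recap" and the log path below are
# hypothetical; the "ok=/failed=/unreachable=" fields are standard Ansible.
check_recap() {
    # keep only host summary lines, then look for non-zero failure counters
    ! grep -E 'ok=[0-9]+' "$1" | grep -Eq 'failed=[1-9]|unreachable=[1-9]'
}

# Usage:
#   check_recap /tmp/recap.log || echo "deployment reported failures" >&2
```

Against the recaps above (all hosts `failed=0 unreachable=0`), such a check passes and the job proceeds to the next `osism apply` step.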
2025-08-29 14:31:11.964206 | orchestrator |
2025-08-29 14:31:11.964363 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-08-29 14:31:11.964391 | orchestrator |
2025-08-29 14:31:11.964411 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-08-29 14:31:11.964429 | orchestrator | Friday 29 August 2025 14:31:00 +0000 (0:00:00.166) 0:00:00.166 *********
2025-08-29 14:31:11.964460 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:11.964474 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:11.964485 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:11.964499 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:11.964516 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:11.964535 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:11.964553 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:11.964606 | orchestrator |
2025-08-29 14:31:11.964630 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:31:11.964649 | orchestrator |
2025-08-29 14:31:11.964668 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:31:11.964679 | orchestrator | Friday 29 August 2025 14:31:00 +0000 (0:00:00.266) 0:00:00.432 *********
2025-08-29 14:31:11.964690 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:11.964701 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:11.964793 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:11.964808 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:11.964821 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:11.964833 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:11.964845 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:11.964857 | orchestrator |
2025-08-29 14:31:11.964870 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-08-29 14:31:11.964882 | orchestrator |
2025-08-29 14:31:11.964894 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:31:11.964907 | orchestrator | Friday 29 August 2025 14:31:04 +0000 (0:00:03.795) 0:00:04.228 *********
2025-08-29 14:31:11.964920 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 14:31:11.964933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-08-29 14:31:11.964945 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 14:31:11.964957 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 14:31:11.964970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 14:31:11.964983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 14:31:11.964994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-08-29 14:31:11.965005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 14:31:11.965015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 14:31:11.965026 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 14:31:11.965036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 14:31:11.965047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 14:31:11.965058 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-08-29 14:31:11.965069 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 14:31:11.965080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 14:31:11.965090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-08-29 14:31:11.965101 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:11.965112 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 14:31:11.965123 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 14:31:11.965133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 14:31:11.965144 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 14:31:11.965155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 14:31:11.965166 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 14:31:11.965176 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 14:31:11.965187 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:11.965198 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 14:31:11.965208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 14:31:11.965219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-08-29 14:31:11.965230 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-08-29 14:31:11.965240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-08-29 14:31:11.965258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-08-29 14:31:11.965278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-08-29 14:31:11.965289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 14:31:11.965300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 14:31:11.965310 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-08-29 14:31:11.965321 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:11.965332 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-08-29 14:31:11.965343 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:11.965353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 14:31:11.965364 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-08-29 14:31:11.965375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 14:31:11.965385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 14:31:11.965396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 14:31:11.965406 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 14:31:11.965417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 14:31:11.965428 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 14:31:11.965439 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:11.965472 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-08-29 14:31:11.965484 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 14:31:11.965495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-08-29 14:31:11.965505 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-08-29 14:31:11.965516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-08-29 14:31:11.965527 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:11.965538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-08-29 14:31:11.965548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-08-29 14:31:11.965559 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:11.965570 | orchestrator |
2025-08-29 14:31:11.965581 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-08-29 14:31:11.965591 | orchestrator |
2025-08-29 14:31:11.965602 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-08-29 14:31:11.965613 | orchestrator | Friday 29 August 2025 14:31:04 +0000 (0:00:00.461) 0:00:04.690 *********
2025-08-29 14:31:11.965624 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:11.965634 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:11.965645 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:11.965655 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:11.965666 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:11.965677 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:11.965687 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:11.965698 | orchestrator |
2025-08-29 14:31:11.965709 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-08-29 14:31:11.965744 | orchestrator | Friday 29 August 2025 14:31:05 +0000 (0:00:01.190) 0:00:05.881 *********
2025-08-29 14:31:11.965779 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:11.965790 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:11.965800 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:11.965810 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:11.965821 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:11.965832 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:11.965842 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:11.965853 | orchestrator |
2025-08-29 14:31:11.965864 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-08-29 14:31:11.965875 | orchestrator | Friday 29 August 2025 14:31:07 +0000 (0:00:01.148) 0:00:07.029 *********
2025-08-29 14:31:11.965887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:11.965908 | orchestrator |
2025-08-29 14:31:11.965919 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-08-29 14:31:11.965930 | orchestrator | Friday 29 August 2025 14:31:07 +0000 (0:00:00.276) 0:00:07.305 *********
2025-08-29 14:31:11.965941 | orchestrator | changed: [testbed-manager]
2025-08-29 14:31:11.965952 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:11.965963 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:11.965988 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:11.965999 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:11.966010 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:11.966123 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:11.966146 | orchestrator |
2025-08-29 14:31:11.966165 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-08-29 14:31:11.966185 | orchestrator | Friday 29 August 2025 14:31:09 +0000 (0:00:02.117) 0:00:09.423 *********
2025-08-29 14:31:11.966202 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:11.966220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:11.966233 | orchestrator |
2025-08-29 14:31:11.966244 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-08-29 14:31:11.966254 | orchestrator | Friday 29 August 2025 14:31:09 +0000 (0:00:00.283) 0:00:09.707 *********
2025-08-29 14:31:11.966265 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:11.966276 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:11.966286 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:11.966296 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:11.966307 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:11.966317 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:11.966328 | orchestrator |
2025-08-29 14:31:11.966339 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-08-29 14:31:11.966349 | orchestrator | Friday 29 August 2025 14:31:10 +0000 (0:00:01.070) 0:00:10.777 *********
2025-08-29 14:31:11.966360 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:11.966370 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:11.966381 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:11.966392 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:11.966402 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:11.966412 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:11.966423 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:11.966433 | orchestrator |
2025-08-29 14:31:11.966444 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-08-29 14:31:11.966454 | orchestrator | Friday 29 August 2025 14:31:11 +0000 (0:00:00.590) 0:00:11.368 *********
2025-08-29 14:31:11.966465 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:11.966475 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:11.966486 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:11.966496 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:11.966507 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:11.966517 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:11.966528 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:11.966538 | orchestrator |
2025-08-29 14:31:11.966549 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-08-29 14:31:11.966561 | orchestrator | Friday 29 August 2025 14:31:11 +0000 (0:00:00.263) 0:00:11.772 *********
2025-08-29 14:31:11.966571 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:11.966582 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:11.966603 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.982945 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.983074 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:23.983088 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:23.983125 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:23.983134 | orchestrator |
2025-08-29 14:31:23.983146 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-08-29 14:31:23.983157 | orchestrator | Friday 29 August 2025 14:31:12 +0000 (0:00:00.263) 0:00:12.035 *********
2025-08-29 14:31:23.983169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:23.983197 | orchestrator |
2025-08-29 14:31:23.983208 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-08-29 14:31:23.983218 | orchestrator | Friday 29 August 2025 14:31:12 +0000 (0:00:00.296) 0:00:12.331 *********
2025-08-29 14:31:23.983228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:23.983238 | orchestrator |
2025-08-29 14:31:23.983246 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-08-29 14:31:23.983255 | orchestrator | Friday 29 August 2025 14:31:12 +0000 (0:00:00.298) 0:00:12.630 *********
2025-08-29 14:31:23.983264 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.983274 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.983283 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.983292 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.983301 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.983311 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.983320 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.983329 | orchestrator |
2025-08-29 14:31:23.983339 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-08-29 14:31:23.983347 | orchestrator | Friday 29 August 2025 14:31:13 +0000 (0:00:01.219) 0:00:13.850 *********
2025-08-29 14:31:23.983357 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.983367 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.983377 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.983388 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.983398 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:23.983408 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:23.983418 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:23.983428 | orchestrator |
2025-08-29 14:31:23.983438 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-08-29 14:31:23.983449 | orchestrator | Friday 29 August 2025 14:31:14 +0000 (0:00:00.216) 0:00:14.067 *********
2025-08-29 14:31:23.983459 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.983469 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.983480 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.983490 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.983500 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.983510 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.983520 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.983530 | orchestrator |
2025-08-29 14:31:23.983540 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-08-29 14:31:23.983550 | orchestrator | Friday 29 August 2025 14:31:14 +0000 (0:00:00.584) 0:00:14.651 *********
2025-08-29 14:31:23.983560 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.983570 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.983580 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.983589 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.983648 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:23.983659 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:23.983669 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:23.983679 | orchestrator |
2025-08-29 14:31:23.983689 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-08-29 14:31:23.983765 | orchestrator | Friday 29 August 2025 14:31:14 +0000 (0:00:00.243) 0:00:14.894 *********
2025-08-29 14:31:23.983775 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.983784 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:23.983793 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:23.983802 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:23.983810 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:23.983819 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:23.983832 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:23.983841 | orchestrator |
2025-08-29 14:31:23.983850 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-08-29 14:31:23.983859 | orchestrator | Friday 29 August 2025 14:31:15 +0000 (0:00:00.569) 0:00:15.464 *********
2025-08-29 14:31:23.983867 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.983876 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:23.983885 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:23.983893 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:23.983901 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:23.983910 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:23.983919 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:23.983928 | orchestrator |
2025-08-29 14:31:23.983937 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-08-29 14:31:23.983945 | orchestrator | Friday 29 August 2025 14:31:16 +0000 (0:00:01.131) 0:00:16.595 *********
2025-08-29 14:31:23.983954 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.983962 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.983971 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.983979 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.983988 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.983996 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984005 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984013 | orchestrator |
2025-08-29 14:31:23.984022 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-08-29 14:31:23.984031 | orchestrator | Friday 29 August 2025 14:31:17 +0000 (0:00:01.183) 0:00:17.778 *********
2025-08-29 14:31:23.984059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:23.984069 | orchestrator |
2025-08-29 14:31:23.984077 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-08-29 14:31:23.984086 | orchestrator | Friday 29 August 2025 14:31:18 +0000 (0:00:00.400) 0:00:18.179 *********
2025-08-29 14:31:23.984095 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.984104 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:23.984112 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:23.984121 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:23.984129 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:23.984138 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:23.984146 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:23.984155 | orchestrator |
2025-08-29 14:31:23.984163 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 14:31:23.984172 | orchestrator | Friday 29 August 2025 14:31:19 +0000 (0:00:01.295) 0:00:19.474 *********
2025-08-29 14:31:23.984180 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984189 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.984197 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984206 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.984214 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984223 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984231 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984240 | orchestrator |
2025-08-29 14:31:23.984249 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 14:31:23.984258 | orchestrator | Friday 29 August 2025 14:31:19 +0000 (0:00:00.226) 0:00:19.701 *********
2025-08-29 14:31:23.984273 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984282 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.984291 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984299 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.984308 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984316 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984325 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984333 | orchestrator |
2025-08-29 14:31:23.984342 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 14:31:23.984351 | orchestrator | Friday 29 August 2025 14:31:19 +0000 (0:00:00.232) 0:00:19.934 *********
2025-08-29 14:31:23.984359 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984368 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.984376 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984385 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.984393 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984402 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984410 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984419 | orchestrator |
2025-08-29 14:31:23.984428 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 14:31:23.984436 | orchestrator | Friday 29 August 2025 14:31:20 +0000 (0:00:00.217) 0:00:20.152 *********
2025-08-29 14:31:23.984446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:23.984457 | orchestrator |
2025-08-29 14:31:23.984466 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 14:31:23.984475 | orchestrator | Friday 29 August 2025 14:31:20 +0000 (0:00:00.288) 0:00:20.441 *********
2025-08-29 14:31:23.984483 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984492 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.984500 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984509 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.984517 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984526 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984534 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984542 | orchestrator |
2025-08-29 14:31:23.984551 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 14:31:23.984559 | orchestrator | Friday 29 August 2025 14:31:20 +0000 (0:00:00.513) 0:00:20.954 *********
2025-08-29 14:31:23.984568 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:31:23.984577 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:31:23.984585 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:31:23.984594 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:31:23.984602 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:23.984611 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:23.984619 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:23.984628 | orchestrator |
2025-08-29 14:31:23.984641 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 14:31:23.984650 | orchestrator | Friday 29 August 2025 14:31:21 +0000 (0:00:00.267) 0:00:21.222 *********
2025-08-29 14:31:23.984659 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984667 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:23.984676 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:23.984684 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:23.984693 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984701 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984734 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984743 | orchestrator |
2025-08-29 14:31:23.984752 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 14:31:23.984761 | orchestrator | Friday 29 August 2025 14:31:22 +0000 (0:00:01.058) 0:00:22.281 *********
2025-08-29 14:31:23.984769 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984784 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:23.984792 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:23.984801 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:23.984809 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984818 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:23.984826 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:23.984835 | orchestrator |
2025-08-29 14:31:23.984843 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 14:31:23.984852 | orchestrator | Friday 29 August 2025 14:31:22 +0000 (0:00:00.572) 0:00:22.854 *********
2025-08-29 14:31:23.984861 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:23.984869 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:23.984878 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:23.984887 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:23.984901 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.913032 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913143 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913156 | orchestrator |
2025-08-29 14:32:05.913168 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 14:32:05.913178 | orchestrator | Friday 29 August 2025 14:31:23 +0000 (0:00:01.085) 0:00:23.940 *********
2025-08-29 14:32:05.913187 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913196 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913204 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913214 | orchestrator | changed: [testbed-manager]
2025-08-29 14:32:05.913224 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.913233 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.913242 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.913251 | orchestrator |
2025-08-29 14:32:05.913260 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-08-29 14:32:05.913269 | orchestrator | Friday 29 August 2025 14:31:41 +0000 (0:00:18.006) 0:00:41.946 *********
2025-08-29 14:32:05.913278 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.913287 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.913296 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.913304 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.913313 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913321 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913330 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913338 | orchestrator |
2025-08-29 14:32:05.913347 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-08-29 14:32:05.913356 | orchestrator | Friday 29 August 2025 14:31:42 +0000 (0:00:00.226) 0:00:42.173 *********
2025-08-29 14:32:05.913365 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.913373 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.913382 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.913390 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.913399 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913408 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913416 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913425 | orchestrator |
2025-08-29 14:32:05.913434 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-08-29 14:32:05.913442 | orchestrator | Friday 29 August 2025 14:31:42 +0000 (0:00:00.234) 0:00:42.407 *********
2025-08-29 14:32:05.913451 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.913460 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.913468 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.913477 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.913485 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913494 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913503 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913511 | orchestrator |
2025-08-29 14:32:05.913520 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-08-29 14:32:05.913529 | orchestrator | Friday 29 August 2025 14:31:42 +0000 (0:00:00.230) 0:00:42.637 *********
2025-08-29 14:32:05.913540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:32:05.913585 | orchestrator |
2025-08-29 14:32:05.913602 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-08-29 14:32:05.913617 | orchestrator | Friday 29 August 2025 14:31:42 +0000 (0:00:00.293) 0:00:42.931 *********
2025-08-29 14:32:05.913630 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.913645 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.913659 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913673 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913710 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913725 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.913737 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.913749 | orchestrator |
2025-08-29 14:32:05.913764 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-08-29 14:32:05.913779 | orchestrator | Friday 29 August 2025 14:31:44 +0000 (0:00:01.624) 0:00:44.555 *********
2025-08-29 14:32:05.913794 | orchestrator | changed: [testbed-manager]
2025-08-29 14:32:05.913809 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.913826 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.913841 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.913857 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:32:05.913871 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:32:05.913881 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:32:05.913891 | orchestrator |
2025-08-29 14:32:05.913901 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-08-29 14:32:05.913910 | orchestrator | Friday 29 August 2025 14:31:45 +0000 (0:00:01.027) 0:00:45.582 *********
2025-08-29 14:32:05.913919 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.913927 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.913936 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.913944 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.913953 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.913961 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.913969 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.913978 | orchestrator |
2025-08-29 14:32:05.913986 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-08-29 14:32:05.913994 | orchestrator | Friday 29 August 2025 14:31:46 +0000 (0:00:00.800) 0:00:46.383 *********
2025-08-29 14:32:05.914005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:32:05.914074 | orchestrator |
2025-08-29 14:32:05.914086 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-08-29 14:32:05.914096 | orchestrator | Friday 29 August 2025 14:31:46 +0000 (0:00:00.310) 0:00:46.694 *********
2025-08-29 14:32:05.914104 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.914113 | orchestrator | changed: [testbed-manager]
2025-08-29 14:32:05.914121 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.914129 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.914138 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:32:05.914146 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:32:05.914155 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:32:05.914163 | orchestrator |
2025-08-29 14:32:05.914189 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-08-29 14:32:05.914198 | orchestrator | Friday 29 August 2025 14:31:47 +0000 (0:00:00.977) 0:00:47.672 *********
2025-08-29 14:32:05.914207 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:32:05.914215 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:32:05.914224 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:32:05.914232 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:32:05.914241 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:32:05.914261 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:32:05.914269 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:32:05.914278 | orchestrator |
2025-08-29 14:32:05.914286 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-08-29 14:32:05.914295 | orchestrator | Friday 29 August 2025 14:31:47 +0000 (0:00:00.280) 0:00:47.952 *********
2025-08-29 14:32:05.914303 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.914312 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:32:05.914320 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:32:05.914328 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.914337 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.914345 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:32:05.914353 | orchestrator | changed: [testbed-manager]
2025-08-29 14:32:05.914362 | orchestrator |
2025-08-29 14:32:05.914370 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-08-29 14:32:05.914379 | orchestrator | Friday 29 August 2025 14:32:00 +0000 (0:00:12.758) 0:01:00.710 *********
2025-08-29 14:32:05.914387 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.914396 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.914404 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.914412 | orchestrator | ok: [testbed-node-5]
2025-08-29
14:32:05.914421 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.914429 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914437 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.914446 | orchestrator | 2025-08-29 14:32:05.914454 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-08-29 14:32:05.914463 | orchestrator | Friday 29 August 2025 14:32:01 +0000 (0:00:00.918) 0:01:01.629 ********* 2025-08-29 14:32:05.914471 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.914480 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.914488 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914496 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:05.914504 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:05.914512 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.914521 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:05.914529 | orchestrator | 2025-08-29 14:32:05.914538 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-08-29 14:32:05.914546 | orchestrator | Friday 29 August 2025 14:32:02 +0000 (0:00:00.909) 0:01:02.538 ********* 2025-08-29 14:32:05.914554 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.914563 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914571 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.914579 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:05.914588 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:05.914596 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.914619 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:05.914628 | orchestrator | 2025-08-29 14:32:05.914637 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-08-29 14:32:05.914645 | orchestrator | Friday 29 August 2025 14:32:02 +0000 (0:00:00.225) 0:01:02.763 ********* 2025-08-29 14:32:05.914654 | 
orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.914662 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914671 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.914679 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:05.914713 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:05.914722 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.914730 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:05.914738 | orchestrator | 2025-08-29 14:32:05.914747 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-08-29 14:32:05.914755 | orchestrator | Friday 29 August 2025 14:32:03 +0000 (0:00:00.216) 0:01:02.980 ********* 2025-08-29 14:32:05.914764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:05.914780 | orchestrator | 2025-08-29 14:32:05.914789 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-08-29 14:32:05.914797 | orchestrator | Friday 29 August 2025 14:32:03 +0000 (0:00:00.303) 0:01:03.283 ********* 2025-08-29 14:32:05.914806 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.914819 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.914827 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:05.914836 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.914844 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:05.914853 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:05.914861 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914869 | orchestrator | 2025-08-29 14:32:05.914878 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-08-29 14:32:05.914886 | orchestrator | Friday 29 August 2025 14:32:05 +0000 
(0:00:01.702) 0:01:04.986 ********* 2025-08-29 14:32:05.914895 | orchestrator | changed: [testbed-manager] 2025-08-29 14:32:05.914903 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:05.914912 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:05.914920 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:05.914928 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:05.914937 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:05.914945 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:05.914953 | orchestrator | 2025-08-29 14:32:05.914962 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-08-29 14:32:05.914971 | orchestrator | Friday 29 August 2025 14:32:05 +0000 (0:00:00.635) 0:01:05.622 ********* 2025-08-29 14:32:05.914979 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.914987 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:05.914996 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:05.915004 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:05.915013 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:05.915021 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:05.915030 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:05.915038 | orchestrator | 2025-08-29 14:32:05.915046 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-08-29 14:32:05.915067 | orchestrator | Friday 29 August 2025 14:32:05 +0000 (0:00:00.244) 0:01:05.866 ********* 2025-08-29 14:34:28.048226 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:28.048358 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:28.048371 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:28.048381 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:28.048390 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:28.048399 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:28.048408 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 14:34:28.048418 | orchestrator | 2025-08-29 14:34:28.048428 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-08-29 14:34:28.048448 | orchestrator | Friday 29 August 2025 14:32:07 +0000 (0:00:01.235) 0:01:07.101 ********* 2025-08-29 14:34:28.048458 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:28.048468 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:28.048477 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:28.048486 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:28.048495 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:28.048505 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:28.048514 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:28.048523 | orchestrator | 2025-08-29 14:34:28.048532 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-08-29 14:34:28.048541 | orchestrator | Friday 29 August 2025 14:32:08 +0000 (0:00:01.747) 0:01:08.849 ********* 2025-08-29 14:34:28.048550 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:28.048559 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:28.048568 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:28.048576 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:28.048585 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:28.048594 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:28.048656 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:28.048666 | orchestrator | 2025-08-29 14:34:28.048675 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-08-29 14:34:28.048684 | orchestrator | Friday 29 August 2025 14:32:11 +0000 (0:00:02.489) 0:01:11.339 ********* 2025-08-29 14:34:28.048692 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:28.048701 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:28.048710 | orchestrator | 
ok: [testbed-node-3] 2025-08-29 14:34:28.048718 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:28.048727 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:28.048736 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:28.048746 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:28.048755 | orchestrator | 2025-08-29 14:34:28.048765 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-08-29 14:34:28.048776 | orchestrator | Friday 29 August 2025 14:32:50 +0000 (0:00:38.663) 0:01:50.003 ********* 2025-08-29 14:34:28.048786 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:28.048796 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:34:28.048806 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:34:28.048816 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:34:28.048827 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:34:28.048837 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:34:28.048847 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:34:28.048857 | orchestrator | 2025-08-29 14:34:28.048867 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-08-29 14:34:28.048878 | orchestrator | Friday 29 August 2025 14:34:06 +0000 (0:01:16.697) 0:03:06.701 ********* 2025-08-29 14:34:28.048888 | orchestrator | ok: [testbed-manager] 2025-08-29 14:34:28.048897 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:28.048907 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:28.048918 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:28.048928 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:28.048938 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:28.048953 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:28.048968 | orchestrator | 2025-08-29 14:34:28.048984 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-08-29 14:34:28.049006 
| orchestrator | Friday 29 August 2025 14:34:08 +0000 (0:00:01.744) 0:03:08.445 ********* 2025-08-29 14:34:28.049027 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:34:28.049042 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:34:28.049056 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:34:28.049071 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:34:28.049087 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:34:28.049102 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:34:28.049118 | orchestrator | changed: [testbed-manager] 2025-08-29 14:34:28.049135 | orchestrator | 2025-08-29 14:34:28.049151 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-08-29 14:34:28.049168 | orchestrator | Friday 29 August 2025 14:34:21 +0000 (0:00:12.913) 0:03:21.358 ********* 2025-08-29 14:34:28.049199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-08-29 14:34:28.049219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-08-29 14:34:28.049265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-08-29 14:34:28.049277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-08-29 14:34:28.049286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-08-29 14:34:28.049298 | orchestrator | 2025-08-29 14:34:28.049313 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-08-29 14:34:28.049334 | orchestrator | Friday 29 August 2025 14:34:21 +0000 (0:00:00.427) 0:03:21.785 ********* 2025-08-29 14:34:28.049350 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:34:28.049364 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:28.049380 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:34:28.049395 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:34:28.049410 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:28.049424 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
14:34:28.049433 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 14:34:28.049442 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:28.049451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:34:28.049459 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:34:28.049468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 14:34:28.049476 | orchestrator | 2025-08-29 14:34:28.049485 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-08-29 14:34:28.049494 | orchestrator | Friday 29 August 2025 14:34:23 +0000 (0:00:01.595) 0:03:23.381 ********* 2025-08-29 14:34:28.049502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:34:28.049512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:34:28.049521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:34:28.049529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:34:28.049538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:34:28.049546 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:34:28.049555 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:34:28.049563 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:34:28.049572 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:34:28.049581 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:34:28.049598 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:34:28.049607 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:34:28.049650 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:34:28.049661 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:34:28.049670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:34:28.049679 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:34:28.049694 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:34:28.049714 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:34:28.049730 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:34:28.049744 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:34:28.049757 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:34:28.049772 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:34:28.049794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:34:30.234520 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 
14:34:30.234703 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:34:30.234727 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:34:30.234747 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:34:30.234765 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:34:30.234784 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 14:34:30.234802 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:34:30.234821 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:34:30.234839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:34:30.234859 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:34:30.234877 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 14:34:30.234896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 14:34:30.234915 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 14:34:30.234934 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 14:34:30.234954 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 14:34:30.234973 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 14:34:30.234992 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 
14:34:30.235013 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 14:34:30.235033 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 14:34:30.235054 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 14:34:30.235075 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:34:30.235130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:34:30.235152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:34:30.235172 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 14:34:30.235190 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:34:30.235209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:34:30.235227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 14:34:30.235247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:34:30.235269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:34:30.235289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235366 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235385 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 14:34:30.235423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:34:30.235442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:34:30.235461 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:34:30.235501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:34:30.235521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 14:34:30.235541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:34:30.235561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:34:30.235579 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 14:34:30.235654 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:34:30.235679 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:34:30.235699 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 14:34:30.235719 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 14:34:30.235738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 
14:34:30.235757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 14:34:30.235777 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 14:34:30.235796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 14:34:30.235816 | orchestrator | 2025-08-29 14:34:30.235839 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-08-29 14:34:30.235859 | orchestrator | Friday 29 August 2025 14:34:28 +0000 (0:00:04.619) 0:03:28.000 ********* 2025-08-29 14:34:30.235896 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.235916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.235935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.235954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.235974 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.235993 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.236012 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 14:34:30.236031 | orchestrator | 2025-08-29 14:34:30.236050 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-08-29 14:34:30.236070 | orchestrator | Friday 29 August 2025 14:34:28 +0000 (0:00:00.635) 0:03:28.636 ********* 2025-08-29 14:34:30.236096 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 14:34:30.236117 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236136 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236156 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:34:30.236175 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:34:30.236193 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236212 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:34:30.236231 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:34:30.236250 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236268 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:34:30.236307 | orchestrator |
2025-08-29 14:34:30.236326 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-08-29 14:34:30.236345 | orchestrator | Friday 29 August 2025 14:34:29 +0000 (0:00:00.598) 0:03:29.234 *********
2025-08-29 14:34:30.236364 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236394 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236415 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:34:30.236434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:34:30.236454 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236474 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:34:30.236495 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:34:30.236534 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236574 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:34:30.236593 | orchestrator |
2025-08-29 14:34:30.236612 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-08-29 14:34:30.236663 | orchestrator | Friday 29 August 2025 14:34:29 +0000 (0:00:00.666) 0:03:29.901 *********
2025-08-29 14:34:30.236682 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:34:30.236700 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:34:30.236732 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:34:30.236750 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:34:30.236768 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:34:30.236787 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:34:30.236804 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:34:30.236842 | orchestrator |
2025-08-29 14:34:30.236881 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-08-29 14:34:42.304031 | orchestrator | Friday 29 August 2025 14:34:30 +0000 (0:00:00.294) 0:03:30.195 *********
2025-08-29 14:34:42.304142 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:42.304154 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:42.304161 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:42.304168 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:42.304175 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:42.304181 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:42.304188 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:42.304194 | orchestrator |
2025-08-29 14:34:42.304202 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 14:34:42.304210 | orchestrator | Friday 29 August 2025 14:34:35 +0000 (0:00:05.589) 0:03:35.784 *********
2025-08-29 14:34:42.304216 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-08-29 14:34:42.304223 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-08-29 14:34:42.304230 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:34:42.304238 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-08-29 14:34:42.304244 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:34:42.304251 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:34:42.304258 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-08-29 14:34:42.304264 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-08-29 14:34:42.304270 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:34:42.304277 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:34:42.304285 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-08-29 14:34:42.304291 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:34:42.304297 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-08-29 14:34:42.304302 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:34:42.304307 | orchestrator |
2025-08-29 14:34:42.304313 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 14:34:42.304320 | orchestrator | Friday 29 August 2025 14:34:36 +0000 (0:00:00.337) 0:03:36.122 *********
2025-08-29 14:34:42.304327 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 14:34:42.304334 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 14:34:42.304340 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 14:34:42.304346 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 14:34:42.304352 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 14:34:42.304358 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 14:34:42.304364 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 14:34:42.304371 | orchestrator |
2025-08-29 14:34:42.304379 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 14:34:42.304386 | orchestrator | Friday 29 August 2025 14:34:37 +0000 (0:00:01.057) 0:03:37.179 *********
2025-08-29 14:34:42.304394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:34:42.304403 | orchestrator |
2025-08-29 14:34:42.304409 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 14:34:42.304416 | orchestrator | Friday 29 August 2025 14:34:37 +0000 (0:00:00.421) 0:03:37.601 *********
2025-08-29 14:34:42.304422 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:42.304428 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:42.304435 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:42.304463 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:42.304470 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:42.304476 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:42.304482 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:42.304488 | orchestrator |
2025-08-29 14:34:42.304495 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 14:34:42.304503 | orchestrator | Friday 29 August 2025 14:34:39 +0000 (0:00:01.422) 0:03:39.023 *********
2025-08-29 14:34:42.304509 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:42.304515 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:42.304521 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:42.304528 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:42.304534 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:42.304540 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:42.304546 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:42.304552 | orchestrator |
2025-08-29 14:34:42.304574 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 14:34:42.304580 | orchestrator | Friday 29 August 2025 14:34:39 +0000 (0:00:00.733) 0:03:39.757 *********
2025-08-29 14:34:42.304586 | orchestrator | changed: [testbed-manager]
2025-08-29 14:34:42.304593 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:34:42.304599 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:34:42.304605 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:34:42.304658 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:34:42.304666 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:34:42.304672 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:34:42.304678 | orchestrator |
2025-08-29 14:34:42.304684 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 14:34:42.304690 | orchestrator | Friday 29 August 2025 14:34:40 +0000 (0:00:00.712) 0:03:40.470 *********
2025-08-29 14:34:42.304696 | orchestrator | ok: [testbed-manager]
2025-08-29 14:34:42.304702 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:34:42.304709 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:34:42.304715 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:34:42.304721 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:34:42.304727 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:34:42.304733 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:34:42.304739 | orchestrator |
2025-08-29 14:34:42.304746 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 14:34:42.304753 | orchestrator | Friday 29 August 2025 14:34:41 +0000 (0:00:00.626) 0:03:41.097 *********
2025-08-29 14:34:42.304780 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476595.919766, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304790 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476630.576617, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304797 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476638.3233693, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304811 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476649.976153, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304817 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476622.584062, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304824 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476620.124974, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304831 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476630.6410034, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:34:42.304841 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565045 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565164 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565225 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565239 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565255 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565267 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:07.565279 | orchestrator |
2025-08-29 14:35:07.565293 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-08-29 14:35:07.565305 | orchestrator | Friday 29 August 2025 14:34:42 +0000 (0:00:01.160) 0:03:42.257 *********
2025-08-29 14:35:07.565316 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:07.565328 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.565338 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:07.565349 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.565359 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:07.565370 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:07.565381 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:07.565391 | orchestrator |
2025-08-29 14:35:07.565402 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-08-29 14:35:07.565413 | orchestrator | Friday 29 August 2025 14:34:43 +0000 (0:00:01.228) 0:03:43.485 *********
2025-08-29 14:35:07.565424 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:07.565435 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.565446 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:07.565457 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.565485 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:07.565497 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:07.565507 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:07.565518 | orchestrator |
2025-08-29 14:35:07.565528 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-08-29 14:35:07.565550 | orchestrator | Friday 29 August 2025 14:34:44 +0000 (0:00:01.270) 0:03:44.755 *********
2025-08-29 14:35:07.565560 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:07.565571 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.565581 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.565592 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:07.565655 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:07.565667 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:07.565677 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:07.565688 | orchestrator |
2025-08-29 14:35:07.565699 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-08-29 14:35:07.565717 | orchestrator | Friday 29 August 2025 14:34:45 +0000 (0:00:01.198) 0:03:45.954 *********
2025-08-29 14:35:07.565737 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:07.565757 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:07.565775 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:07.565794 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:07.565812 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:07.565830 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:07.565849 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:07.565869 | orchestrator |
2025-08-29 14:35:07.565889 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-08-29 14:35:07.565908 | orchestrator | Friday 29 August 2025 14:34:46 +0000 (0:00:00.339) 0:03:46.293 *********
2025-08-29 14:35:07.565927 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:07.565939 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:07.565950 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:07.565961 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:07.565971 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:07.565982 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:07.565992 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:07.566002 | orchestrator |
2025-08-29 14:35:07.566077 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-08-29 14:35:07.566090 | orchestrator | Friday 29 August 2025 14:34:47 +0000 (0:00:00.837) 0:03:47.131 *********
2025-08-29 14:35:07.566103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:07.566117 | orchestrator |
2025-08-29 14:35:07.566128 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-08-29 14:35:07.566138 | orchestrator | Friday 29 August 2025 14:34:47 +0000 (0:00:00.408) 0:03:47.539 *********
2025-08-29 14:35:07.566149 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:07.566160 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:07.566171 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.566181 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:07.566191 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:07.566202 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:07.566213 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.566223 | orchestrator |
2025-08-29 14:35:07.566234 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-08-29 14:35:07.566245 | orchestrator | Friday 29 August 2025 14:34:55 +0000 (0:00:08.329) 0:03:55.868 *********
2025-08-29 14:35:07.566255 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:07.566266 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:07.566277 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:07.566287 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:07.566298 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:07.566312 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:07.566330 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:07.566348 | orchestrator |
2025-08-29 14:35:07.566366 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-08-29 14:35:07.566398 | orchestrator | Friday 29 August 2025 14:34:57 +0000 (0:00:01.221) 0:03:57.090 *********
2025-08-29 14:35:07.566419 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:07.566437 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:07.566459 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:07.566469 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:07.566480 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:07.566490 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:07.566500 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:07.566511 | orchestrator |
2025-08-29 14:35:07.566521 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 14:35:07.566532 | orchestrator | Friday 29 August 2025 14:34:58 +0000 (0:00:01.061) 0:03:58.151 *********
2025-08-29 14:35:07.566543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:07.566554 | orchestrator |
2025-08-29 14:35:07.566565 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-08-29 14:35:07.566575 | orchestrator | Friday 29 August 2025 14:34:58 +0000 (0:00:00.472) 0:03:58.623 *********
2025-08-29 14:35:07.566586 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:07.566624 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:07.566639 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:07.566650 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.566660 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.566671 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:07.566681 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:07.566691 | orchestrator |
2025-08-29 14:35:07.566702 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-08-29 14:35:07.566713 | orchestrator | Friday 29 August 2025 14:35:06 +0000 (0:00:08.287) 0:04:06.911 *********
2025-08-29 14:35:07.566723 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:07.566734 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:07.566744 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:07.566766 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.025915 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.026014 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.026068 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.026076 | orchestrator |
2025-08-29 14:36:15.026085 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-08-29 14:36:15.026094 | orchestrator | Friday 29 August 2025 14:35:07 +0000 (0:00:00.610) 0:04:07.521 *********
2025-08-29 14:36:15.026101 | orchestrator | changed: [testbed-manager]
2025-08-29 14:36:15.026121 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:36:15.026132 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:36:15.026143 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.026154 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.026166 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.026177 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.026189 | orchestrator |
2025-08-29 14:36:15.026202 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-08-29 14:36:15.026210 | orchestrator | Friday 29 August 2025 14:35:08 +0000 (0:00:01.132) 0:04:08.653 *********
2025-08-29 14:36:15.026217 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:36:15.026224 | orchestrator | changed: [testbed-manager]
2025-08-29 14:36:15.026231 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:36:15.026237 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.026244 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.026250 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.026257 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.026264 | orchestrator |
2025-08-29 14:36:15.026270 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-08-29 14:36:15.026277 | orchestrator | Friday 29 August 2025 14:35:09 +0000 (0:00:01.050) 0:04:09.704 *********
2025-08-29 14:36:15.026308 | orchestrator | ok: [testbed-manager]
2025-08-29 14:36:15.026316 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:36:15.026322 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:36:15.026329 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:36:15.026335 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:36:15.026342 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:36:15.026348 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:36:15.026355 | orchestrator |
2025-08-29 14:36:15.026361 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-08-29 14:36:15.026369 | orchestrator | Friday 29 August 2025 14:35:10 +0000 (0:00:00.333) 0:04:10.037 *********
2025-08-29 14:36:15.026375 | orchestrator | ok: [testbed-manager]
2025-08-29 14:36:15.026382 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:36:15.026390 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:36:15.026401 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:36:15.026418 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:36:15.026430 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:36:15.026441 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:36:15.026451 | orchestrator |
2025-08-29 14:36:15.026462 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-08-29 14:36:15.026473 | orchestrator | Friday 29 August 2025 14:35:10 +0000 (0:00:00.352) 0:04:10.390 *********
2025-08-29 14:36:15.026485 | orchestrator | ok: [testbed-manager]
2025-08-29 14:36:15.026495 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:36:15.026505 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:36:15.026514 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:36:15.026525 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:36:15.026537 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:36:15.026548 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:36:15.026559 | orchestrator |
2025-08-29 14:36:15.026618 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-08-29 14:36:15.026627 | orchestrator | Friday 29 August 2025 14:35:10 +0000 (0:00:00.280) 0:04:10.671 *********
2025-08-29 14:36:15.026635 | orchestrator | ok: [testbed-manager]
2025-08-29 14:36:15.026642 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:36:15.026650 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:36:15.026657 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:36:15.026663 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:36:15.026670 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:36:15.026676 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:36:15.026682 | orchestrator |
2025-08-29 14:36:15.026689 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-08-29 14:36:15.026695 | orchestrator | Friday 29 August 2025 14:35:16 +0000 (0:00:05.878) 0:04:16.550 *********
2025-08-29 14:36:15.026716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:36:15.026725 | orchestrator |
2025-08-29 14:36:15.026732 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-08-29 14:36:15.026739 | orchestrator | Friday 29 August 2025 14:35:16 +0000 (0:00:00.412) 0:04:16.962 *********
2025-08-29 14:36:15.026746 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026752 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-08-29 14:36:15.026759 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026765 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:36:15.026772 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-08-29 14:36:15.026778 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026785 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-08-29 14:36:15.026791 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:36:15.026798 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:36:15.026813 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026819 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-08-29 14:36:15.026826 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026832 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-08-29 14:36:15.026839 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:36:15.026845 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026852 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-08-29 14:36:15.026858 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:36:15.026880 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:36:15.026887 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-08-29 14:36:15.026894 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-08-29 14:36:15.026902 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:36:15.026912 | orchestrator |
2025-08-29 14:36:15.026924 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-08-29 14:36:15.026934 | orchestrator | Friday 29 August 2025 14:35:17 +0000 (0:00:00.335) 0:04:17.298 *********
2025-08-29 14:36:15.026950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:36:15.026962 | orchestrator |
2025-08-29 14:36:15.026972 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-08-29 14:36:15.026983 | orchestrator | Friday 29 August 2025 14:35:17 +0000 (0:00:00.310) 0:04:17.715 *********
2025-08-29 14:36:15.026993 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-08-29 14:36:15.027003 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-08-29 14:36:15.027013 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:36:15.027021 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-08-29 14:36:15.027031 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:36:15.027040 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:36:15.027050 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-08-29 14:36:15.027059 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-08-29 14:36:15.027070 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:36:15.027080 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:36:15.027090 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-08-29 14:36:15.027102 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:36:15.027113 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-08-29 14:36:15.027124 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:36:15.027135 | orchestrator |
2025-08-29 14:36:15.027151 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-08-29 14:36:15.027164 | orchestrator | Friday 29 August 2025 14:35:18 +0000 (0:00:00.310) 0:04:18.025 *********
2025-08-29 14:36:15.027174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:36:15.027185 | orchestrator |
2025-08-29 14:36:15.027196 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-08-29 14:36:15.027208 | orchestrator | Friday 29 August 2025 14:35:18 +0000 (0:00:00.550) 0:04:18.576 *********
2025-08-29 14:36:15.027219 | orchestrator | changed: [testbed-manager]
2025-08-29 14:36:15.027230 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.027242 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:36:15.027250 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.027256 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:36:15.027262 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.027276 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.027283 | orchestrator |
2025-08-29 14:36:15.027290 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-08-29 14:36:15.027296 | orchestrator | Friday 29 August 2025 14:35:52 +0000 (0:00:34.160) 0:04:52.736 *********
2025-08-29 14:36:15.027303 | orchestrator | changed: [testbed-manager]
2025-08-29 14:36:15.027309 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:36:15.027316 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.027322 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.027329 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:36:15.027335 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.027342 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.027348 | orchestrator |
2025-08-29 14:36:15.027355 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-08-29 14:36:15.027362 | orchestrator | Friday 29 August 2025 14:36:00 +0000 (0:00:07.634) 0:05:00.370 *********
2025-08-29 14:36:15.027369 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:36:15.027375 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:36:15.027382 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:36:15.027388 | orchestrator | changed: [testbed-manager]
2025-08-29 14:36:15.027395 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:36:15.027401 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:36:15.027408 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:36:15.027414 | orchestrator |
2025-08-29 14:36:15.027421 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-08-29 14:36:15.027428 | orchestrator | Friday 29 August 2025 14:36:07 +0000 (0:00:07.220) 0:05:07.591 *********
2025-08-29 14:36:15.027434 | orchestrator | ok:
[testbed-node-0] 2025-08-29 14:36:15.027441 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:15.027447 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:15.027454 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:15.027460 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:15.027467 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:15.027473 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:15.027479 | orchestrator | 2025-08-29 14:36:15.027486 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 14:36:15.027494 | orchestrator | Friday 29 August 2025 14:36:09 +0000 (0:00:01.596) 0:05:09.188 ********* 2025-08-29 14:36:15.027500 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:15.027506 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:15.027513 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:15.027520 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:15.027526 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:15.027533 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:15.027540 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:15.027546 | orchestrator | 2025-08-29 14:36:15.027553 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-08-29 14:36:15.027589 | orchestrator | Friday 29 August 2025 14:36:15 +0000 (0:00:05.783) 0:05:14.972 ********* 2025-08-29 14:36:26.671203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:26.671317 | orchestrator | 2025-08-29 14:36:26.671333 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 14:36:26.671345 | orchestrator | Friday 29 August 2025 14:36:15 +0000 
(0:00:00.444) 0:05:15.416 ********* 2025-08-29 14:36:26.671355 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:26.671365 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:36:26.671374 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:26.671384 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:26.671394 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:26.671403 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:26.671412 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:26.671443 | orchestrator | 2025-08-29 14:36:26.671453 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 14:36:26.671463 | orchestrator | Friday 29 August 2025 14:36:16 +0000 (0:00:00.807) 0:05:16.224 ********* 2025-08-29 14:36:26.671473 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:26.671483 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:26.671493 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:26.671502 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:26.671511 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:26.671521 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:26.671530 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:26.671540 | orchestrator | 2025-08-29 14:36:26.671549 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 14:36:26.671587 | orchestrator | Friday 29 August 2025 14:36:17 +0000 (0:00:01.724) 0:05:17.948 ********* 2025-08-29 14:36:26.671597 | orchestrator | changed: [testbed-manager] 2025-08-29 14:36:26.671606 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:36:26.671616 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:36:26.671625 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:36:26.671635 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:36:26.671644 | orchestrator | changed: [testbed-node-0] 2025-08-29 
14:36:26.671653 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:36:26.671663 | orchestrator | 2025-08-29 14:36:26.671672 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 14:36:26.671682 | orchestrator | Friday 29 August 2025 14:36:18 +0000 (0:00:00.830) 0:05:18.779 ********* 2025-08-29 14:36:26.671691 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.671700 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.671709 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.671719 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:26.671728 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:26.671739 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:26.671749 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:26.671760 | orchestrator | 2025-08-29 14:36:26.671770 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 14:36:26.671799 | orchestrator | Friday 29 August 2025 14:36:19 +0000 (0:00:00.281) 0:05:19.060 ********* 2025-08-29 14:36:26.671810 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.671821 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.671832 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.671843 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:26.671853 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:26.671863 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:26.671873 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:26.671884 | orchestrator | 2025-08-29 14:36:26.671895 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-08-29 14:36:26.671905 | orchestrator | Friday 29 August 2025 14:36:19 +0000 (0:00:00.411) 0:05:19.472 ********* 2025-08-29 14:36:26.671916 | orchestrator | ok: [testbed-manager] 2025-08-29 
14:36:26.671926 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:26.671936 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:26.671947 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:26.671957 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:26.671967 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:26.671977 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:26.671988 | orchestrator | 2025-08-29 14:36:26.672003 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-08-29 14:36:26.672014 | orchestrator | Friday 29 August 2025 14:36:19 +0000 (0:00:00.296) 0:05:19.769 ********* 2025-08-29 14:36:26.672025 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.672036 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.672046 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.672057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:26.672067 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:26.672086 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:26.672096 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:26.672106 | orchestrator | 2025-08-29 14:36:26.672115 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-08-29 14:36:26.672126 | orchestrator | Friday 29 August 2025 14:36:20 +0000 (0:00:00.299) 0:05:20.068 ********* 2025-08-29 14:36:26.672135 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:26.672145 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:26.672154 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:26.672164 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:26.672173 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:26.672183 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:26.672192 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:26.672202 | orchestrator | 2025-08-29 14:36:26.672211 | orchestrator | 
TASK [osism.services.docker : Print used docker version] *********************** 2025-08-29 14:36:26.672221 | orchestrator | Friday 29 August 2025 14:36:20 +0000 (0:00:00.352) 0:05:20.421 ********* 2025-08-29 14:36:26.672230 | orchestrator | ok: [testbed-manager] =>  2025-08-29 14:36:26.672240 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672249 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 14:36:26.672259 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672268 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 14:36:26.672277 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672287 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 14:36:26.672296 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672306 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 14:36:26.672316 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672341 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 14:36:26.672351 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672361 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 14:36:26.672370 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 14:36:26.672380 | orchestrator | 2025-08-29 14:36:26.672389 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-08-29 14:36:26.672399 | orchestrator | Friday 29 August 2025 14:36:20 +0000 (0:00:00.304) 0:05:20.725 ********* 2025-08-29 14:36:26.672408 | orchestrator | ok: [testbed-manager] =>  2025-08-29 14:36:26.672418 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672427 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 14:36:26.672436 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672446 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 14:36:26.672455 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672465 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 14:36:26.672474 | orchestrator 
|  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672483 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 14:36:26.672493 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672502 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 14:36:26.672511 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672521 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 14:36:26.672530 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 14:36:26.672539 | orchestrator | 2025-08-29 14:36:26.672549 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-08-29 14:36:26.672575 | orchestrator | Friday 29 August 2025 14:36:21 +0000 (0:00:00.425) 0:05:21.150 ********* 2025-08-29 14:36:26.672585 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.672594 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.672604 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.672613 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:26.672622 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:26.672632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:26.672641 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:36:26.672650 | orchestrator | 2025-08-29 14:36:26.672660 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-08-29 14:36:26.672669 | orchestrator | Friday 29 August 2025 14:36:21 +0000 (0:00:00.284) 0:05:21.435 ********* 2025-08-29 14:36:26.672683 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.672693 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.672702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.672712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:36:26.672721 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:36:26.672731 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:36:26.672740 | orchestrator 
| skipping: [testbed-node-5] 2025-08-29 14:36:26.672749 | orchestrator | 2025-08-29 14:36:26.672759 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-08-29 14:36:26.672768 | orchestrator | Friday 29 August 2025 14:36:21 +0000 (0:00:00.310) 0:05:21.746 ********* 2025-08-29 14:36:26.672779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:36:26.672791 | orchestrator | 2025-08-29 14:36:26.672800 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-08-29 14:36:26.672810 | orchestrator | Friday 29 August 2025 14:36:22 +0000 (0:00:00.397) 0:05:22.143 ********* 2025-08-29 14:36:26.672819 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:26.672829 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:26.672838 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:26.672848 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:26.672857 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:36:26.672866 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:26.672876 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:26.672885 | orchestrator | 2025-08-29 14:36:26.672895 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-08-29 14:36:26.672904 | orchestrator | Friday 29 August 2025 14:36:23 +0000 (0:00:00.916) 0:05:23.059 ********* 2025-08-29 14:36:26.672913 | orchestrator | ok: [testbed-manager] 2025-08-29 14:36:26.672923 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:36:26.672936 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:36:26.672946 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:36:26.672955 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:36:26.672965 | orchestrator 
| ok: [testbed-node-1] 2025-08-29 14:36:26.672974 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:36:26.672983 | orchestrator | 2025-08-29 14:36:26.672993 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 14:36:26.673003 | orchestrator | Friday 29 August 2025 14:36:26 +0000 (0:00:02.942) 0:05:26.002 ********* 2025-08-29 14:36:26.673013 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 14:36:26.673023 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 14:36:26.673032 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 14:36:26.673042 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 14:36:26.673051 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 14:36:26.673061 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 14:36:26.673070 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:36:26.673079 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 14:36:26.673089 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 14:36:26.673098 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 14:36:26.673124 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:36:26.673134 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-08-29 14:36:26.673153 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-08-29 14:36:26.673162 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-08-29 14:36:26.673172 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:36:26.673181 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 14:36:26.673197 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 14:36:26.673207 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 14:36:26.673222 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-08-29 14:37:26.159102 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-08-29 14:37:26.159204 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-08-29 14:37:26.159211 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-08-29 14:37:26.159216 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:26.159221 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:26.159225 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-08-29 14:37:26.159229 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-08-29 14:37:26.159233 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-08-29 14:37:26.159237 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:26.159241 | orchestrator |
2025-08-29 14:37:26.159247 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-08-29 14:37:26.159252 | orchestrator | Friday 29 August 2025 14:36:26 +0000 (0:00:00.785) 0:05:26.788 *********
2025-08-29 14:37:26.159256 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159260 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159264 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159268 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159272 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159275 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159279 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159283 | orchestrator |
2025-08-29 14:37:26.159287 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-08-29 14:37:26.159291 | orchestrator | Friday 29 August 2025 14:36:33 +0000 (0:00:06.444) 0:05:33.233 *********
2025-08-29 14:37:26.159295 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159299 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159303 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159306 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159310 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159314 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159318 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159322 | orchestrator |
2025-08-29 14:37:26.159325 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-08-29 14:37:26.159329 | orchestrator | Friday 29 August 2025 14:36:34 +0000 (0:00:01.110) 0:05:34.343 *********
2025-08-29 14:37:26.159333 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159337 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159341 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159345 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159348 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159352 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159356 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159360 | orchestrator |
2025-08-29 14:37:26.159363 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-08-29 14:37:26.159367 | orchestrator | Friday 29 August 2025 14:36:42 +0000 (0:00:07.627) 0:05:41.970 *********
2025-08-29 14:37:26.159371 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:26.159375 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159379 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159382 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159386 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159390 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159394 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159397 | orchestrator |
2025-08-29 14:37:26.159401 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-08-29 14:37:26.159405 | orchestrator | Friday 29 August 2025 14:36:45 +0000 (0:00:03.270) 0:05:45.241 *********
2025-08-29 14:37:26.159409 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159431 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159435 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159438 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159442 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159446 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159450 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159454 | orchestrator |
2025-08-29 14:37:26.159457 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-08-29 14:37:26.159471 | orchestrator | Friday 29 August 2025 14:36:46 +0000 (0:00:01.606) 0:05:46.847 *********
2025-08-29 14:37:26.159475 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159479 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159483 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159487 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159491 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159494 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159498 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159502 | orchestrator |
2025-08-29 14:37:26.159506 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-08-29 14:37:26.159509 | orchestrator | Friday 29 August 2025 14:36:48 +0000 (0:00:01.319) 0:05:48.167 *********
2025-08-29 14:37:26.159513 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:26.159517 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:26.159520 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:26.159567 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:26.159571 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:26.159575 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:26.159579 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:26.159582 | orchestrator |
2025-08-29 14:37:26.159586 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-08-29 14:37:26.159590 | orchestrator | Friday 29 August 2025 14:36:48 +0000 (0:00:00.616) 0:05:48.784 *********
2025-08-29 14:37:26.159594 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159597 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159601 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159605 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159608 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159612 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159616 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159619 | orchestrator |
2025-08-29 14:37:26.159623 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-08-29 14:37:26.159627 | orchestrator | Friday 29 August 2025 14:36:58 +0000 (0:00:09.692) 0:05:58.477 *********
2025-08-29 14:37:26.159631 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:26.159634 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159648 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159652 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159656 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159660 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159664 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159668 | orchestrator |
2025-08-29 14:37:26.159673 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-08-29 14:37:26.159677 | orchestrator | Friday 29 August 2025 14:36:59 +0000 (0:00:00.938) 0:05:59.416 *********
2025-08-29 14:37:26.159681 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159685 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159690 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159694 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159698 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159702 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159706 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159710 | orchestrator |
2025-08-29 14:37:26.159715 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-08-29 14:37:26.159723 | orchestrator | Friday 29 August 2025 14:37:08 +0000 (0:00:09.117) 0:06:08.533 *********
2025-08-29 14:37:26.159727 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159732 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159736 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159740 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159744 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159748 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159753 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159757 | orchestrator |
2025-08-29 14:37:26.159761 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-08-29 14:37:26.159765 | orchestrator | Friday 29 August 2025 14:37:19 +0000 (0:00:10.994) 0:06:19.528 *********
2025-08-29 14:37:26.159770 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-08-29 14:37:26.159774 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-08-29 14:37:26.159778 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-08-29 14:37:26.159782 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-08-29 14:37:26.159787 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-08-29 14:37:26.159791 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-08-29 14:37:26.159795 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-08-29 14:37:26.159799 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-08-29 14:37:26.159803 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-08-29 14:37:26.159807 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-08-29 14:37:26.159811 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-08-29 14:37:26.159815 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-08-29 14:37:26.159819 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-08-29 14:37:26.159824 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-08-29 14:37:26.159828 | orchestrator |
2025-08-29 14:37:26.159832 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-08-29 14:37:26.159836 | orchestrator | Friday 29 August 2025 14:37:20 +0000 (0:00:01.180) 0:06:20.709 *********
2025-08-29 14:37:26.159840 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:26.159845 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:26.159849 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:26.159853 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:26.159857 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:26.159861 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:26.159865 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:26.159869 | orchestrator |
2025-08-29 14:37:26.159874 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-08-29 14:37:26.159878 | orchestrator | Friday 29 August 2025 14:37:21 +0000 (0:00:00.535) 0:06:21.244 *********
2025-08-29 14:37:26.159882 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:26.159886 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:26.159890 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:26.159897 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:26.159902 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:26.159906 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:26.159910 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:26.159914 | orchestrator |
2025-08-29 14:37:26.159918 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-08-29 14:37:26.159924 | orchestrator | Friday 29 August 2025 14:37:25 +0000 (0:00:03.917) 0:06:25.162 *********
2025-08-29 14:37:26.159928 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:26.159932 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:26.159936 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:26.159940 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:26.159945 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:26.159953 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:26.159957 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:26.159961 | orchestrator |
2025-08-29 14:37:26.159966 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-08-29 14:37:26.159970 | orchestrator | Friday 29 August 2025 14:37:25 +0000 (0:00:00.559) 0:06:25.721 *********
2025-08-29 14:37:26.159975 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-08-29 14:37:26.159979 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-08-29 14:37:26.159983 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:26.159987 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-08-29 14:37:26.159991 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-08-29 14:37:26.159996 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:26.160000 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-08-29 14:37:26.160004 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-08-29 14:37:26.160009 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:26.160013 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-08-29 14:37:26.160017 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-08-29 14:37:26.160024 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:45.896416 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-08-29 14:37:45.896509 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-08-29 14:37:45.896544 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:45.896553 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-08-29 14:37:45.896559 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-08-29 14:37:45.896565 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:45.896572 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-08-29 14:37:45.896578 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-08-29 14:37:45.896583 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:45.896589 | orchestrator |
2025-08-29 14:37:45.896597 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-08-29 14:37:45.896605 | orchestrator | Friday 29 August 2025 14:37:26 +0000 (0:00:00.604) 0:06:26.326 *********
2025-08-29 14:37:45.896610 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:45.896616 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:45.896622 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:45.896628 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:45.896633 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:45.896639 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:45.896645 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:45.896650 | orchestrator |
2025-08-29 14:37:45.896656 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-08-29 14:37:45.896663 | orchestrator | Friday 29 August 2025 14:37:26 +0000 (0:00:00.507) 0:06:26.833 *********
2025-08-29 14:37:45.896669 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:45.896674 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:45.896680 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:45.896685 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:45.896691 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:45.896697 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:45.896702 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:45.896708 | orchestrator |
2025-08-29 14:37:45.896714 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-08-29 14:37:45.896719 | orchestrator | Friday 29 August 2025 14:37:27 +0000 (0:00:00.721) 0:06:27.394 *********
2025-08-29 14:37:45.896725 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:45.896731 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:45.896736 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:45.896758 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:45.896764 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:45.896770 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:45.896775 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:45.896781 | orchestrator |
2025-08-29 14:37:45.896786 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-08-29 14:37:45.896792 | orchestrator | Friday 29 August 2025 14:37:28 +0000 (0:00:00.721) 0:06:28.115 *********
2025-08-29 14:37:45.896798 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.896803 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.896809 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.896815 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.896820 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.896826 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.896831 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.896837 | orchestrator |
2025-08-29 14:37:45.896843 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-08-29 14:37:45.896849 | orchestrator | Friday 29 August 2025 14:37:29 +0000 (0:00:01.689) 0:06:29.805 *********
2025-08-29 14:37:45.896855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:45.896862 | orchestrator |
2025-08-29 14:37:45.896869 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-08-29 14:37:45.896874 | orchestrator | Friday 29 August 2025 14:37:30 +0000 (0:00:00.907) 0:06:30.713 *********
2025-08-29 14:37:45.896880 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.896886 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:45.896891 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:45.896897 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:45.896902 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:45.896908 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:45.896914 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:45.896919 | orchestrator |
2025-08-29 14:37:45.896925 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-08-29 14:37:45.896930 | orchestrator | Friday 29 August 2025 14:37:31 +0000 (0:00:01.023) 0:06:31.737 *********
2025-08-29 14:37:45.896936 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.896942 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:45.896947 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:45.896954 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:45.896961 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:45.896967 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:45.896973 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:45.896980 | orchestrator |
2025-08-29 14:37:45.896986 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-08-29 14:37:45.896992 | orchestrator | Friday 29 August 2025 14:37:32 +0000 (0:00:01.131) 0:06:32.868 *********
2025-08-29 14:37:45.896998 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897005 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:45.897011 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:45.897017 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:45.897024 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:45.897030 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:45.897036 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:45.897042 | orchestrator |
2025-08-29 14:37:45.897049 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-08-29 14:37:45.897055 | orchestrator | Friday 29 August 2025 14:37:34 +0000 (0:00:01.329) 0:06:34.198 *********
2025-08-29 14:37:45.897061 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:45.897081 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.897087 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.897094 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.897106 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.897112 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.897119 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.897125 | orchestrator |
2025-08-29 14:37:45.897131 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-08-29 14:37:45.897137 | orchestrator | Friday 29 August 2025 14:37:35 +0000 (0:00:01.426) 0:06:35.625 *********
2025-08-29 14:37:45.897144 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897150 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:45.897157 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:45.897177 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:45.897184 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:45.897190 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:45.897197 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:45.897203 | orchestrator |
2025-08-29 14:37:45.897210 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-08-29 14:37:45.897217 | orchestrator | Friday 29 August 2025 14:37:36 +0000 (0:00:01.296) 0:06:36.921 *********
2025-08-29 14:37:45.897223 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:45.897229 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:45.897235 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:45.897242 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:45.897248 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:45.897255 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:45.897261 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:45.897268 | orchestrator |
2025-08-29 14:37:45.897274 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-08-29 14:37:45.897281 | orchestrator | Friday 29 August 2025 14:37:38 +0000 (0:00:01.600) 0:06:38.522 *********
2025-08-29 14:37:45.897288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:45.897294 | orchestrator |
2025-08-29 14:37:45.897301 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-08-29 14:37:45.897307 | orchestrator | Friday 29 August 2025 14:37:39 +0000 (0:00:00.893) 0:06:39.415 *********
2025-08-29 14:37:45.897314 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897320 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.897327 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.897333 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.897340 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.897345 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.897351 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.897357 | orchestrator |
2025-08-29 14:37:45.897362 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-08-29 14:37:45.897368 | orchestrator | Friday 29 August 2025 14:37:40 +0000 (0:00:01.492) 0:06:40.907 *********
2025-08-29 14:37:45.897374 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897380 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.897385 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.897391 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.897396 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.897402 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.897407 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.897413 | orchestrator |
2025-08-29 14:37:45.897419 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-08-29 14:37:45.897424 | orchestrator | Friday 29 August 2025 14:37:42 +0000 (0:00:01.183) 0:06:42.090 *********
2025-08-29 14:37:45.897430 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897436 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.897442 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.897447 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.897453 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.897458 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.897468 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.897474 | orchestrator |
2025-08-29 14:37:45.897480 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-08-29 14:37:45.897489 | orchestrator | Friday 29 August 2025 14:37:43 +0000 (0:00:01.365) 0:06:43.456 *********
2025-08-29 14:37:45.897495 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:45.897500 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:45.897506 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:45.897533 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:45.897542 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:45.897550 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:45.897558 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:45.897566 | orchestrator |
2025-08-29 14:37:45.897575 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-08-29 14:37:45.897585 | orchestrator | Friday 29 August 2025 14:37:44 +0000 (0:00:01.224) 0:06:44.681 *********
2025-08-29 14:37:45.897594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:45.897603 | orchestrator |
2025-08-29 14:37:45.897612 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:37:45.897622 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.873) 0:06:45.554 *********
2025-08-29 14:37:45.897628 | orchestrator |
2025-08-29 14:37:45.897634 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:37:45.897640 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.038) 0:06:45.593 *********
2025-08-29 14:37:45.897645 | orchestrator |
2025-08-29 14:37:45.897651 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:37:45.897657 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.045) 0:06:45.639 *********
2025-08-29 14:37:45.897662 | orchestrator |
2025-08-29 14:37:45.897668 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:37:45.897674 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.038) 0:06:45.678 *********
2025-08-29 14:37:45.897679 | orchestrator |
2025-08-29 14:37:45.897685 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:37:45.897695 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.039) 0:06:45.717 *********
2025-08-29 14:38:12.794689 | orchestrator |
2025-08-29 14:38:12.794813 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:12.794832 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.045) 0:06:45.763 *********
2025-08-29 14:38:12.794844 | orchestrator |
2025-08-29 14:38:12.794856 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:12.794871 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.039) 0:06:45.802 *********
2025-08-29 14:38:12.794889 | orchestrator |
2025-08-29 14:38:12.794907 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 14:38:12.794924 | orchestrator | Friday 29 August 2025 14:37:45 +0000 (0:00:00.040) 0:06:45.843 *********
2025-08-29 14:38:12.794942 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:12.794959 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:12.794977 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:12.794998 | orchestrator |
2025-08-29 14:38:12.795018 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-08-29 14:38:12.795037 | orchestrator | Friday 29 August 2025 14:37:47 +0000 (0:00:01.363) 0:06:47.207 *********
2025-08-29 14:38:12.795057 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:12.795080 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:12.795095 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:12.795106 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:12.795117 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:12.795128 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:12.795139 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:12.795183 | orchestrator |
2025-08-29 14:38:12.795196 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-08-29 14:38:12.795209 | orchestrator | Friday 29 August 2025 14:37:48 +0000 (0:00:01.355) 0:06:48.563 *********
2025-08-29 14:38:12.795221 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:12.795235 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:12.795255 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:12.795273 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:12.795292 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:12.795308 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:12.795326 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:12.795345 | orchestrator |
2025-08-29 14:38:12.795363 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-08-29 14:38:12.795381 | orchestrator | Friday 29 August 2025 14:37:49 +0000 (0:00:01.195) 0:06:49.758 *********
2025-08-29 14:38:12.795400 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:12.795418 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:12.795436 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:12.795455 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:12.795474 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:12.795492 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:12.795539 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:12.795557 | orchestrator |
2025-08-29 14:38:12.795575 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-08-29 14:38:12.795594 | orchestrator | Friday 29 August 2025 14:37:52 +0000 (0:00:02.436) 0:06:52.194 *********
2025-08-29 14:38:12.795615 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:12.795633 | orchestrator |
2025-08-29 14:38:12.795652 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-08-29 14:38:12.795665 | orchestrator | Friday 29 August 2025 14:37:52 +0000 (0:00:00.111) 0:06:52.306 *********
2025-08-29 14:38:12.795676 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.795687 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:12.795697 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:12.795708 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:12.795718 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:12.795729 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:12.795739 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:12.795750 | orchestrator |
2025-08-29 14:38:12.795761 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-08-29 14:38:12.795799 | orchestrator | Friday 29 August 2025 14:37:53 +0000 (0:00:00.977) 0:06:53.284 *********
2025-08-29 14:38:12.795817 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:12.795836 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:12.795855 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:12.795874 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:12.795892 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:12.795911 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:12.795922 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:12.795932 | orchestrator |
2025-08-29 14:38:12.795943 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-08-29 14:38:12.795954 | orchestrator | Friday 29 August 2025 14:37:54 +0000 (0:00:00.702) 0:06:53.986 *********
2025-08-29 14:38:12.795965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:38:12.795978 | orchestrator |
2025-08-29 14:38:12.795989 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-08-29 14:38:12.796000 | orchestrator | Friday 29 August 2025 14:37:54 +0000 (0:00:00.906) 0:06:54.893 *********
2025-08-29 14:38:12.796010 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.796021 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:12.796044 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:12.796055 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:12.796066 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:12.796076 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:12.796086 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:12.796097 | orchestrator |
2025-08-29 14:38:12.796107 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-08-29 14:38:12.796118 | orchestrator | Friday 29 August 2025 14:37:55 +0000 (0:00:00.849) 0:06:55.743 *********
2025-08-29 14:38:12.796129 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-08-29 14:38:12.796140 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-08-29 14:38:12.796151 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-08-29 14:38:12.796184 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-08-29 14:38:12.796195 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-08-29 14:38:12.796206 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-08-29 14:38:12.796217 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-08-29 14:38:12.796227 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-08-29 14:38:12.796238 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-08-29 14:38:12.796248 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-08-29 14:38:12.796259 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-08-29 14:38:12.796270 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-08-29 14:38:12.796280 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-08-29 14:38:12.796291 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-08-29 14:38:12.796301 | orchestrator |
2025-08-29 14:38:12.796312 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-08-29 14:38:12.796323 | orchestrator | Friday 29 August 2025 14:37:58 +0000 (0:00:02.634) 0:06:58.377 *********
2025-08-29 14:38:12.796334 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:12.796344 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:12.796355 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:12.796366 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:12.796376 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:12.796387 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:12.796414 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:12.796425 | orchestrator |
2025-08-29 14:38:12.796435 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-08-29 14:38:12.796446 | orchestrator | Friday 29 August 2025 14:37:58 +0000 (0:00:00.526) 0:06:58.903 *********
2025-08-29 14:38:12.796459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:38:12.796471 | orchestrator |
2025-08-29 14:38:12.796482 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-08-29 14:38:12.796493 | orchestrator | Friday 29 August 2025 14:37:59 +0000 (0:00:00.814) 0:06:59.718 *********
2025-08-29 14:38:12.796559 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.796570 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:12.796581 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:12.796591 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:12.796602 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:12.796612 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:12.796623 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:12.796633 | orchestrator |
2025-08-29 14:38:12.796644 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-08-29 14:38:12.796655 | orchestrator | Friday 29 August 2025 14:38:00 +0000 (0:00:01.082) 0:07:00.801 *********
2025-08-29 14:38:12.796665 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.796684 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:12.796694 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:12.796705 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:12.796715 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:12.796726 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:12.796736 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:12.796746 | orchestrator |
2025-08-29 14:38:12.796757 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-08-29 14:38:12.796768 | orchestrator | Friday 29 August 2025 14:38:01 +0000 (0:00:00.865) 0:07:01.666 *********
2025-08-29 14:38:12.796778 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:12.796789 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:12.796800 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:12.796810 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:12.796827 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:12.796838 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:12.796849 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:12.796859 | orchestrator |
2025-08-29 14:38:12.796870 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-08-29 14:38:12.796880 | orchestrator | Friday 29 August 2025 14:38:02 +0000 (0:00:00.617) 0:07:02.284 *********
2025-08-29 14:38:12.796891 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.796902 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:12.796912 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:12.796923 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:12.796933 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:12.796944 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:12.796954 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:12.796965 | orchestrator |
2025-08-29 14:38:12.796975 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-08-29 14:38:12.796986 | orchestrator | Friday 29 August 2025 14:38:03 +0000 (0:00:01.491) 0:07:03.776 *********
2025-08-29 14:38:12.796996 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:12.797007 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:12.797018 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:12.797028 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:12.797038 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:12.797049 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:12.797059 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:12.797070 | orchestrator |
2025-08-29 14:38:12.797080 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-08-29 14:38:12.797096 | orchestrator | Friday 29 August 2025 14:38:04 +0000 (0:00:00.517) 0:07:04.294 *********
2025-08-29 14:38:12.797114 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:12.797134 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:12.797153 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:12.797178 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:12.797202 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:12.797221 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:12.797240 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:12.797260 | orchestrator |
2025-08-29 14:38:12.797279 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-08-29 14:38:12.797301 | orchestrator | Friday 29 August 2025 14:38:12 +0000 (0:00:08.452) 0:07:12.747 *********
2025-08-29 14:38:46.745954 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746130 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:46.746147 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:46.746159 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:46.746170 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:46.746182 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:46.746193 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:46.746205 | orchestrator |
2025-08-29 14:38:46.746218 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-08-29 14:38:46.746231 | orchestrator | Friday 29 August 2025 14:38:14 +0000 (0:00:01.335) 0:07:14.083 *********
2025-08-29 14:38:46.746266 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746278 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:46.746288 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:46.746299 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:46.746310 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:46.746321 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:46.746332 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:46.746343 | orchestrator |
2025-08-29 14:38:46.746354 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-08-29 14:38:46.746365 | orchestrator | Friday 29 August 2025 14:38:15 +0000 (0:00:01.709) 0:07:15.792 *********
2025-08-29 14:38:46.746376 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746387 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:46.746397 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:46.746408 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:46.746418 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:46.746429 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:46.746440 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:46.746450 | orchestrator |
2025-08-29 14:38:46.746461 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 14:38:46.746507 | orchestrator | Friday 29 August 2025 14:38:17 +0000 (0:00:01.625) 0:07:17.417 *********
2025-08-29 14:38:46.746518 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746529 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.746540 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.746551 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.746561 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.746572 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.746583 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.746594 | orchestrator |
2025-08-29 14:38:46.746605 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 14:38:46.746616 | orchestrator | Friday 29 August 2025 14:38:18 +0000 (0:00:01.113) 0:07:18.531 *********
2025-08-29 14:38:46.746627 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:46.746637 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:46.746648 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:46.746659 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:46.746669 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:46.746680 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:46.746690 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:46.746701 | orchestrator |
2025-08-29 14:38:46.746712 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-08-29 14:38:46.746723 | orchestrator | Friday 29 August 2025 14:38:19 +0000 (0:00:00.839) 0:07:19.370 *********
2025-08-29 14:38:46.746733 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:46.746744 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:46.746754 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:46.746765 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:46.746776 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:46.746786 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:46.746797 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:46.746807 | orchestrator |
2025-08-29 14:38:46.746818 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-08-29 14:38:46.746829 | orchestrator | Friday 29 August 2025 14:38:19 +0000 (0:00:00.526) 0:07:19.897 *********
2025-08-29 14:38:46.746839 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746850 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.746861 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.746888 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.746900 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.746910 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.746921 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.746931 | orchestrator |
2025-08-29 14:38:46.746942 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-08-29 14:38:46.746961 | orchestrator | Friday 29 August 2025 14:38:20 +0000 (0:00:00.697) 0:07:20.595 *********
2025-08-29 14:38:46.746972 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.746983 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.746993 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.747004 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.747014 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.747025 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.747035 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.747046 | orchestrator |
2025-08-29 14:38:46.747057 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-08-29 14:38:46.747068 | orchestrator | Friday 29 August 2025 14:38:21 +0000 (0:00:00.529) 0:07:21.124 *********
2025-08-29 14:38:46.747078 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.747089 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.747100 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.747110 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.747121 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.747132 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.747142 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.747153 | orchestrator |
2025-08-29 14:38:46.747164 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-08-29 14:38:46.747175 | orchestrator | Friday 29 August 2025 14:38:21 +0000 (0:00:00.534) 0:07:21.658 *********
2025-08-29 14:38:46.747185 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.747196 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.747207 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.747217 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.747228 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.747238 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.747249 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.747260 | orchestrator |
2025-08-29 14:38:46.747271 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-08-29 14:38:46.747282 | orchestrator | Friday 29 August 2025 14:38:27 +0000 (0:00:05.781) 0:07:27.440 *********
2025-08-29 14:38:46.747293 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:46.747323 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:46.747334 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:38:46.747345 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:38:46.747356 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:38:46.747366 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:38:46.747377 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:38:46.747387 | orchestrator |
2025-08-29 14:38:46.747398 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-08-29 14:38:46.747409 | orchestrator | Friday 29 August 2025 14:38:27 +0000 (0:00:00.527) 0:07:27.968 *********
2025-08-29 14:38:46.747422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:38:46.747436 | orchestrator |
2025-08-29 14:38:46.747447 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-08-29 14:38:46.747458 | orchestrator | Friday 29 August 2025 14:38:29 +0000 (0:00:01.014) 0:07:28.982 *********
2025-08-29 14:38:46.747487 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.747498 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.747509 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.747520 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.747530 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.747541 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.747551 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.747562 | orchestrator |
2025-08-29 14:38:46.747573 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-08-29 14:38:46.747584 | orchestrator | Friday 29 August 2025 14:38:30 +0000 (0:00:01.840) 0:07:30.822 *********
2025-08-29 14:38:46.747603 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:46.747614 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:46.747625 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:46.747635 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:46.747646 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:46.747656 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:46.747667 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:46.747677 | orchestrator | 2025-08-29 14:38:46.747688 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-08-29 14:38:46.747699 | orchestrator | Friday 29 August 2025 14:38:31 +0000 (0:00:01.133) 0:07:31.956 ********* 2025-08-29 14:38:46.747710 | orchestrator | ok: [testbed-manager] 2025-08-29 14:38:46.747721 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:38:46.747731 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:38:46.747742 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:38:46.747752 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:38:46.747763 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:38:46.747777 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:38:46.747795 | orchestrator | 2025-08-29 14:38:46.747814 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-08-29 14:38:46.747832 | orchestrator | Friday 29 August 2025 14:38:33 +0000 (0:00:01.087) 0:07:33.044 ********* 2025-08-29 14:38:46.747850 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747870 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747890 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747908 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747927 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747942 | orchestrator | 
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747953 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 14:38:46.747963 | orchestrator | 2025-08-29 14:38:46.747974 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-08-29 14:38:46.747985 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:01.716) 0:07:34.760 ********* 2025-08-29 14:38:46.747996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:38:46.748007 | orchestrator | 2025-08-29 14:38:46.748018 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-08-29 14:38:46.748029 | orchestrator | Friday 29 August 2025 14:38:35 +0000 (0:00:00.846) 0:07:35.607 ********* 2025-08-29 14:38:46.748039 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:38:46.748050 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:38:46.748060 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:38:46.748071 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:38:46.748081 | orchestrator | changed: [testbed-manager] 2025-08-29 14:38:46.748092 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:38:46.748102 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:38:46.748112 | orchestrator | 2025-08-29 14:38:46.748123 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-08-29 14:38:46.748134 | orchestrator | Friday 29 August 2025 14:38:44 +0000 (0:00:09.299) 0:07:44.906 ********* 2025-08-29 14:38:46.748152 | orchestrator | 
ok: [testbed-manager] 2025-08-29 14:38:46.748163 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:38:46.748181 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:01.117314 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:01.117451 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:01.117523 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:01.117544 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:01.117564 | orchestrator | 2025-08-29 14:39:01.117586 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-08-29 14:39:01.117607 | orchestrator | Friday 29 August 2025 14:38:46 +0000 (0:00:01.797) 0:07:46.703 ********* 2025-08-29 14:39:01.117627 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:01.117645 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:01.117664 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:01.117683 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:01.117702 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:01.117721 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:01.117739 | orchestrator | 2025-08-29 14:39:01.117761 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-08-29 14:39:01.117782 | orchestrator | Friday 29 August 2025 14:38:48 +0000 (0:00:01.309) 0:07:48.013 ********* 2025-08-29 14:39:01.117803 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:01.117825 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:01.117845 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:01.117866 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:01.117887 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:01.117907 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:01.117928 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:01.117949 | orchestrator | 2025-08-29 14:39:01.117970 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-08-29 14:39:01.117990 | orchestrator | 2025-08-29 14:39:01.118012 | orchestrator | TASK [Include hardening role] ************************************************** 2025-08-29 14:39:01.118109 | orchestrator | Friday 29 August 2025 14:38:49 +0000 (0:00:01.490) 0:07:49.503 ********* 2025-08-29 14:39:01.118200 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:39:01.118221 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:01.118243 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:01.118264 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:01.118284 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:01.118305 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:01.118326 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:01.118347 | orchestrator | 2025-08-29 14:39:01.118368 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-08-29 14:39:01.118389 | orchestrator | 2025-08-29 14:39:01.118410 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-08-29 14:39:01.118431 | orchestrator | Friday 29 August 2025 14:38:50 +0000 (0:00:00.543) 0:07:50.047 ********* 2025-08-29 14:39:01.118451 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:01.118490 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:01.118508 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:01.118526 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:01.118545 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:01.118564 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:01.118583 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:01.118602 | orchestrator | 2025-08-29 14:39:01.118621 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-08-29 14:39:01.118641 | orchestrator | Friday 29 
August 2025 14:38:51 +0000 (0:00:01.391) 0:07:51.439 ********* 2025-08-29 14:39:01.118660 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:01.118678 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:01.118695 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:01.118711 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:01.118728 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:01.118744 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:01.118791 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:01.118808 | orchestrator | 2025-08-29 14:39:01.118824 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-08-29 14:39:01.118841 | orchestrator | Friday 29 August 2025 14:38:52 +0000 (0:00:01.417) 0:07:52.857 ********* 2025-08-29 14:39:01.118858 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:39:01.118874 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:01.118897 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:01.118913 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:01.118930 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:01.118947 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:01.118963 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:01.118979 | orchestrator | 2025-08-29 14:39:01.118996 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-08-29 14:39:01.119013 | orchestrator | Friday 29 August 2025 14:38:53 +0000 (0:00:00.996) 0:07:53.853 ********* 2025-08-29 14:39:01.119030 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:01.119046 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:01.119062 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:01.119079 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:01.119095 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:01.119111 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 14:39:01.119128 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:01.119144 | orchestrator | 2025-08-29 14:39:01.119161 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 14:39:01.119178 | orchestrator | 2025-08-29 14:39:01.119194 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 14:39:01.119211 | orchestrator | Friday 29 August 2025 14:38:55 +0000 (0:00:01.248) 0:07:55.102 ********* 2025-08-29 14:39:01.119227 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:39:01.119245 | orchestrator | 2025-08-29 14:39:01.119262 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:39:01.119279 | orchestrator | Friday 29 August 2025 14:38:56 +0000 (0:00:00.972) 0:07:56.074 ********* 2025-08-29 14:39:01.119295 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:01.119312 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:01.119329 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:01.119345 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:01.119362 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:01.119378 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:01.119394 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:01.119411 | orchestrator | 2025-08-29 14:39:01.119428 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:39:01.119494 | orchestrator | Friday 29 August 2025 14:38:56 +0000 (0:00:00.816) 0:07:56.891 ********* 2025-08-29 14:39:01.119513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:01.119531 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:01.119549 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:01.119566 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:01.119584 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:01.119602 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:01.119619 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:01.119637 | orchestrator | 2025-08-29 14:39:01.119655 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 14:39:01.119673 | orchestrator | Friday 29 August 2025 14:38:58 +0000 (0:00:01.179) 0:07:58.071 ********* 2025-08-29 14:39:01.119691 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:39:01.119709 | orchestrator | 2025-08-29 14:39:01.119727 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 14:39:01.119744 | orchestrator | Friday 29 August 2025 14:38:59 +0000 (0:00:01.038) 0:07:59.110 ********* 2025-08-29 14:39:01.119773 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:01.119790 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:01.119809 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:01.119827 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:01.119844 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:01.119861 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:01.119879 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:01.119897 | orchestrator | 2025-08-29 14:39:01.119915 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 14:39:01.119932 | orchestrator | Friday 29 August 2025 14:38:59 +0000 (0:00:00.849) 0:07:59.959 ********* 2025-08-29 14:39:01.119950 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:01.119968 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:01.119985 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:01.120003 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:01.120021 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:01.120038 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:01.120055 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:01.120073 | orchestrator | 2025-08-29 14:39:01.120091 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:39:01.120110 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 14:39:01.120128 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 14:39:01.120145 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:39:01.120161 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:39:01.120176 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:39:01.120193 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:39:01.120209 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 14:39:01.120226 | orchestrator | 2025-08-29 14:39:01.120244 | orchestrator | 2025-08-29 14:39:01.120262 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:39:01.120280 | orchestrator | Friday 29 August 2025 14:39:01 +0000 (0:00:01.097) 0:08:01.057 ********* 2025-08-29 14:39:01.120296 | orchestrator | =============================================================================== 2025-08-29 14:39:01.120313 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.70s 2025-08-29 14:39:01.120331 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 38.66s 2025-08-29 14:39:01.120349 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.16s 2025-08-29 14:39:01.120367 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.01s 2025-08-29 14:39:01.120384 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.91s 2025-08-29 14:39:01.120403 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.76s 2025-08-29 14:39:01.120420 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.99s 2025-08-29 14:39:01.120438 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.69s 2025-08-29 14:39:01.120509 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.30s 2025-08-29 14:39:01.120539 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.12s 2025-08-29 14:39:01.120556 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.45s 2025-08-29 14:39:01.120572 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.33s 2025-08-29 14:39:01.120589 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.29s 2025-08-29 14:39:01.120605 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.63s 2025-08-29 14:39:01.120621 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.63s 2025-08-29 14:39:01.120647 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.22s 2025-08-29 14:39:01.554424 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.44s 2025-08-29 14:39:01.554589 | orchestrator | 
osism.commons.cleanup : Populate service facts -------------------------- 5.88s 2025-08-29 14:39:01.554599 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.78s 2025-08-29 14:39:01.554608 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.78s 2025-08-29 14:39:01.839382 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 14:39:01.839542 | orchestrator | + osism apply network 2025-08-29 14:39:14.437525 | orchestrator | 2025-08-29 14:39:14 | INFO  | Task 15163a10-207b-4346-8543-24ea2d441e9e (network) was prepared for execution. 2025-08-29 14:39:14.437641 | orchestrator | 2025-08-29 14:39:14 | INFO  | It takes a moment until task 15163a10-207b-4346-8543-24ea2d441e9e (network) has been started and output is visible here. 2025-08-29 14:39:43.413654 | orchestrator | 2025-08-29 14:39:43.413776 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-08-29 14:39:43.413792 | orchestrator | 2025-08-29 14:39:43.413804 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-08-29 14:39:43.413816 | orchestrator | Friday 29 August 2025 14:39:18 +0000 (0:00:00.275) 0:00:00.275 ********* 2025-08-29 14:39:43.413828 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.413840 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.413851 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.413862 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.413873 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.413884 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.413894 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.413905 | orchestrator | 2025-08-29 14:39:43.413916 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-08-29 14:39:43.413927 | orchestrator | Friday 29 August 2025 14:39:19 +0000 (0:00:00.705) 
0:00:00.980 ********* 2025-08-29 14:39:43.413940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:39:43.413953 | orchestrator | 2025-08-29 14:39:43.413964 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-08-29 14:39:43.413975 | orchestrator | Friday 29 August 2025 14:39:20 +0000 (0:00:01.221) 0:00:02.202 ********* 2025-08-29 14:39:43.413985 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.413996 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.414007 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.414079 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.414092 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.414103 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.414114 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.414124 | orchestrator | 2025-08-29 14:39:43.414135 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-08-29 14:39:43.414146 | orchestrator | Friday 29 August 2025 14:39:22 +0000 (0:00:02.020) 0:00:04.222 ********* 2025-08-29 14:39:43.414157 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.414167 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.414179 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.414215 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.414228 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.414240 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.414252 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.414264 | orchestrator | 2025-08-29 14:39:43.414277 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-08-29 14:39:43.414289 | orchestrator | 
Friday 29 August 2025 14:39:24 +0000 (0:00:01.853) 0:00:06.076 ********* 2025-08-29 14:39:43.414320 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-08-29 14:39:43.414334 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-08-29 14:39:43.414346 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-08-29 14:39:43.414357 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-08-29 14:39:43.414369 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-08-29 14:39:43.414382 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-08-29 14:39:43.414394 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-08-29 14:39:43.414427 | orchestrator | 2025-08-29 14:39:43.414440 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-08-29 14:39:43.414453 | orchestrator | Friday 29 August 2025 14:39:25 +0000 (0:00:00.966) 0:00:07.042 ********* 2025-08-29 14:39:43.414465 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:39:43.414478 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:39:43.414490 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:39:43.414502 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:39:43.414513 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:39:43.414525 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 14:39:43.414538 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:39:43.414549 | orchestrator | 2025-08-29 14:39:43.414560 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-08-29 14:39:43.414570 | orchestrator | Friday 29 August 2025 14:39:28 +0000 (0:00:03.315) 0:00:10.358 ********* 2025-08-29 14:39:43.414581 | orchestrator | changed: [testbed-manager] 2025-08-29 14:39:43.414592 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:43.414602 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 14:39:43.414612 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:43.414623 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:43.414633 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:43.414644 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:43.414654 | orchestrator | 2025-08-29 14:39:43.414665 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-08-29 14:39:43.414676 | orchestrator | Friday 29 August 2025 14:39:30 +0000 (0:00:01.494) 0:00:11.852 ********* 2025-08-29 14:39:43.414686 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:39:43.414697 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 14:39:43.414707 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 14:39:43.414718 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 14:39:43.414728 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 14:39:43.414738 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 14:39:43.414749 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 14:39:43.414759 | orchestrator | 2025-08-29 14:39:43.414770 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-08-29 14:39:43.414781 | orchestrator | Friday 29 August 2025 14:39:32 +0000 (0:00:01.860) 0:00:13.713 ********* 2025-08-29 14:39:43.414791 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.414802 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.414812 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.414823 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.414834 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.414844 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.414854 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.414865 | orchestrator | 2025-08-29 14:39:43.414876 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] **************************** 2025-08-29 14:39:43.414912 | orchestrator | Friday 29 August 2025 14:39:33 +0000 (0:00:01.092) 0:00:14.806 ********* 2025-08-29 14:39:43.414924 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:39:43.414935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:43.414946 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:43.414956 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:43.414967 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:39:43.414977 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:43.414988 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:43.414998 | orchestrator | 2025-08-29 14:39:43.415012 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-08-29 14:39:43.415031 | orchestrator | Friday 29 August 2025 14:39:34 +0000 (0:00:00.683) 0:00:15.490 ********* 2025-08-29 14:39:43.415051 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.415071 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.415088 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.415106 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.415132 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.415155 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.415174 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.415193 | orchestrator | 2025-08-29 14:39:43.415211 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-08-29 14:39:43.415231 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:02.399) 0:00:17.890 ********* 2025-08-29 14:39:43.415251 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:39:43.415262 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:39:43.415273 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:39:43.415284 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:39:43.415294 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:39:43.415305 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:39:43.415316 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-08-29 14:39:43.415328 | orchestrator | 2025-08-29 14:39:43.415339 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-08-29 14:39:43.415350 | orchestrator | Friday 29 August 2025 14:39:37 +0000 (0:00:00.909) 0:00:18.799 ********* 2025-08-29 14:39:43.415360 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.415371 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:39:43.415381 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:39:43.415392 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:39:43.415402 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:39:43.415452 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:39:43.415463 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:39:43.415474 | orchestrator | 2025-08-29 14:39:43.415485 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-08-29 14:39:43.415495 | orchestrator | Friday 29 August 2025 14:39:39 +0000 (0:00:01.696) 0:00:20.496 ********* 2025-08-29 14:39:43.415515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:39:43.415527 | orchestrator | 2025-08-29 14:39:43.415538 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:39:43.415549 | orchestrator | Friday 29 August 2025 14:39:40 +0000 (0:00:01.305) 0:00:21.802 ********* 2025-08-29 14:39:43.415560 | orchestrator | ok: [testbed-manager] 2025-08-29 
14:39:43.415570 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.415581 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.415592 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.415602 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.415613 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.415623 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.415634 | orchestrator | 2025-08-29 14:39:43.415654 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-08-29 14:39:43.415665 | orchestrator | Friday 29 August 2025 14:39:41 +0000 (0:00:00.982) 0:00:22.785 ********* 2025-08-29 14:39:43.415676 | orchestrator | ok: [testbed-manager] 2025-08-29 14:39:43.415686 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:39:43.415697 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:39:43.415707 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:39:43.415718 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:39:43.415728 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:39:43.415739 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:39:43.415749 | orchestrator | 2025-08-29 14:39:43.415760 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 14:39:43.415771 | orchestrator | Friday 29 August 2025 14:39:42 +0000 (0:00:00.881) 0:00:23.666 ********* 2025-08-29 14:39:43.415782 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415792 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415803 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415814 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415824 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415835 | orchestrator 
| skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415846 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415856 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415866 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415877 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 14:39:43.415887 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415898 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415909 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415919 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 14:39:43.415930 | orchestrator | 2025-08-29 14:39:43.415951 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-08-29 14:40:00.586443 | orchestrator | Friday 29 August 2025 14:39:43 +0000 (0:00:01.199) 0:00:24.866 ********* 2025-08-29 14:40:00.586561 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:40:00.586578 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:00.586590 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:00.586602 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:00.586613 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:00.586624 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:00.586635 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:00.586646 | orchestrator | 2025-08-29 14:40:00.586658 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-08-29 14:40:00.586670 | orchestrator | Friday 29 August 2025 14:39:44 +0000 
(0:00:00.626) 0:00:25.492 ********* 2025-08-29 14:40:00.586683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-node-1, testbed-manager, testbed-node-5, testbed-node-3, testbed-node-4 2025-08-29 14:40:00.586697 | orchestrator | 2025-08-29 14:40:00.586709 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-08-29 14:40:00.586720 | orchestrator | Friday 29 August 2025 14:39:48 +0000 (0:00:04.672) 0:00:30.164 ********* 2025-08-29 14:40:00.586732 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586787 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.586889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586930 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.586969 | orchestrator | 2025-08-29 14:40:00.586982 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 14:40:00.587003 | orchestrator | Friday 29 August 2025 14:39:54 +0000 (0:00:06.208) 0:00:36.372 ********* 2025-08-29 14:40:00.587016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587029 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.587108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:40:00.587120 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.587133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.587146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.587167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:00.587192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:06.947405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:40:06.947501 | orchestrator | 2025-08-29 14:40:06.947509 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 14:40:06.947514 | orchestrator | Friday 29 August 2025 14:40:00 +0000 (0:00:05.658) 0:00:42.031 ********* 2025-08-29 14:40:06.947520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:40:06.947524 | orchestrator | 2025-08-29 14:40:06.947529 | 
orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:40:06.947533 | orchestrator | Friday 29 August 2025 14:40:01 +0000 (0:00:01.284) 0:00:43.316 ********* 2025-08-29 14:40:06.947537 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:06.947542 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:40:06.947546 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:40:06.947550 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:40:06.947553 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:40:06.947557 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:40:06.947561 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:40:06.947565 | orchestrator | 2025-08-29 14:40:06.947568 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 14:40:06.947572 | orchestrator | Friday 29 August 2025 14:40:03 +0000 (0:00:01.186) 0:00:44.502 ********* 2025-08-29 14:40:06.947576 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947581 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947585 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947589 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947603 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947607 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947611 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947615 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947619 | orchestrator | skipping: [testbed-manager] 
2025-08-29 14:40:06.947623 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947627 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947631 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947635 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947638 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:06.947642 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947646 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947650 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947654 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947657 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:06.947661 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947665 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947669 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947673 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947681 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:06.947685 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947689 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947693 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947697 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947701 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:06.947704 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:06.947716 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:40:06.947720 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:40:06.947724 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:40:06.947727 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:40:06.947737 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:06.947741 | orchestrator | 2025-08-29 14:40:06.947745 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 14:40:06.947760 | orchestrator | Friday 29 August 2025 14:40:05 +0000 (0:00:02.153) 0:00:46.656 ********* 2025-08-29 14:40:06.947764 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:40:06.947768 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:06.947771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:06.947775 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:06.947779 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:06.947783 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:06.947786 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:06.947790 | orchestrator | 2025-08-29 14:40:06.947794 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 14:40:06.947798 | orchestrator | Friday 29 August 2025 14:40:05 +0000 (0:00:00.648) 0:00:47.304 ********* 2025-08-29 14:40:06.947802 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 14:40:06.947805 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:40:06.947809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:40:06.947813 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:40:06.947816 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:40:06.947820 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:40:06.947824 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:40:06.947828 | orchestrator | 2025-08-29 14:40:06.947832 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:40:06.947837 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:40:06.947841 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947845 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947849 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947853 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947859 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947863 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:40:06.947872 | orchestrator | 2025-08-29 14:40:06.947876 | orchestrator | 2025-08-29 14:40:06.947880 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:40:06.947884 | orchestrator | Friday 29 August 2025 14:40:06 +0000 (0:00:00.716) 0:00:48.021 ********* 2025-08-29 14:40:06.947888 | orchestrator | =============================================================================== 
2025-08-29 14:40:06.947891 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.21s 2025-08-29 14:40:06.947895 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.66s 2025-08-29 14:40:06.947899 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.67s 2025-08-29 14:40:06.947903 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s 2025-08-29 14:40:06.947906 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.40s 2025-08-29 14:40:06.947910 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.15s 2025-08-29 14:40:06.947914 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s 2025-08-29 14:40:06.947917 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.86s 2025-08-29 14:40:06.947921 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.85s 2025-08-29 14:40:06.947925 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.70s 2025-08-29 14:40:06.947929 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s 2025-08-29 14:40:06.947932 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-08-29 14:40:06.947936 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.28s 2025-08-29 14:40:06.947940 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-08-29 14:40:06.947943 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-08-29 14:40:06.947948 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2025-08-29 
14:40:06.947952 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2025-08-29 14:40:06.947957 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-08-29 14:40:06.947961 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-08-29 14:40:06.947966 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.91s 2025-08-29 14:40:07.234564 | orchestrator | + osism apply wireguard 2025-08-29 14:40:19.280907 | orchestrator | 2025-08-29 14:40:19 | INFO  | Task 36cdfe10-bc4f-417a-91b5-fbe9f737c7eb (wireguard) was prepared for execution. 2025-08-29 14:40:19.281022 | orchestrator | 2025-08-29 14:40:19 | INFO  | It takes a moment until task 36cdfe10-bc4f-417a-91b5-fbe9f737c7eb (wireguard) has been started and output is visible here. 2025-08-29 14:40:39.426356 | orchestrator | 2025-08-29 14:40:39.426475 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 14:40:39.426492 | orchestrator | 2025-08-29 14:40:39.426506 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 14:40:39.426517 | orchestrator | Friday 29 August 2025 14:40:23 +0000 (0:00:00.226) 0:00:00.226 ********* 2025-08-29 14:40:39.426528 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:39.426540 | orchestrator | 2025-08-29 14:40:39.426552 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 14:40:39.426563 | orchestrator | Friday 29 August 2025 14:40:25 +0000 (0:00:01.662) 0:00:01.889 ********* 2025-08-29 14:40:39.426574 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.426585 | orchestrator | 2025-08-29 14:40:39.426596 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 14:40:39.426607 | orchestrator | 
Friday 29 August 2025 14:40:31 +0000 (0:00:06.732) 0:00:08.621 ********* 2025-08-29 14:40:39.426646 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.426657 | orchestrator | 2025-08-29 14:40:39.426668 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 14:40:39.426679 | orchestrator | Friday 29 August 2025 14:40:32 +0000 (0:00:00.572) 0:00:09.194 ********* 2025-08-29 14:40:39.426690 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.426701 | orchestrator | 2025-08-29 14:40:39.426712 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 14:40:39.426723 | orchestrator | Friday 29 August 2025 14:40:32 +0000 (0:00:00.405) 0:00:09.599 ********* 2025-08-29 14:40:39.426734 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:39.426744 | orchestrator | 2025-08-29 14:40:39.426755 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 14:40:39.426766 | orchestrator | Friday 29 August 2025 14:40:33 +0000 (0:00:00.538) 0:00:10.138 ********* 2025-08-29 14:40:39.426776 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:39.426787 | orchestrator | 2025-08-29 14:40:39.426798 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 14:40:39.426808 | orchestrator | Friday 29 August 2025 14:40:33 +0000 (0:00:00.529) 0:00:10.667 ********* 2025-08-29 14:40:39.426819 | orchestrator | ok: [testbed-manager] 2025-08-29 14:40:39.426829 | orchestrator | 2025-08-29 14:40:39.426840 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 14:40:39.426865 | orchestrator | Friday 29 August 2025 14:40:34 +0000 (0:00:00.439) 0:00:11.107 ********* 2025-08-29 14:40:39.426878 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.426890 | orchestrator | 2025-08-29 14:40:39.426902 | orchestrator 
| TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 14:40:39.426914 | orchestrator | Friday 29 August 2025 14:40:35 +0000 (0:00:01.259) 0:00:12.366 ********* 2025-08-29 14:40:39.426927 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:40:39.426940 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.426952 | orchestrator | 2025-08-29 14:40:39.426964 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 14:40:39.426977 | orchestrator | Friday 29 August 2025 14:40:36 +0000 (0:00:00.940) 0:00:13.307 ********* 2025-08-29 14:40:39.426989 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.427001 | orchestrator | 2025-08-29 14:40:39.427013 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 14:40:39.427025 | orchestrator | Friday 29 August 2025 14:40:38 +0000 (0:00:01.689) 0:00:14.996 ********* 2025-08-29 14:40:39.427037 | orchestrator | changed: [testbed-manager] 2025-08-29 14:40:39.427049 | orchestrator | 2025-08-29 14:40:39.427061 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:40:39.427074 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:40:39.427087 | orchestrator | 2025-08-29 14:40:39.427099 | orchestrator | 2025-08-29 14:40:39.427112 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:40:39.427124 | orchestrator | Friday 29 August 2025 14:40:39 +0000 (0:00:00.975) 0:00:15.971 ********* 2025-08-29 14:40:39.427137 | orchestrator | =============================================================================== 2025-08-29 14:40:39.427149 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.73s 2025-08-29 14:40:39.427161 | orchestrator | 
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2025-08-29 14:40:39.427173 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s
2025-08-29 14:40:39.427185 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s
2025-08-29 14:40:39.427197 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2025-08-29 14:40:39.427209 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-08-29 14:40:39.427221 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-08-29 14:40:39.427243 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2025-08-29 14:40:39.427254 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-08-29 14:40:39.427265 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-08-29 14:40:39.427276 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2025-08-29 14:40:39.730163 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-08-29 14:40:39.774503 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-08-29 14:40:39.774555 | orchestrator | Dload Upload Total Spent Left Speed
2025-08-29 14:40:39.852132 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 179 0 --:--:-- --:--:-- --:--:-- 181
2025-08-29 14:40:39.871273 | orchestrator | + osism apply --environment custom workarounds
2025-08-29 14:40:41.707387 | orchestrator | 2025-08-29 14:40:41 | INFO  | Trying to run play workarounds in environment custom
2025-08-29 14:40:51.808068 | orchestrator | 2025-08-29 14:40:51 | INFO  | Task 43a84a28-97e8-4662-a955-46a975161117 (workarounds) was prepared for execution.
2025-08-29 14:40:51.808167 | orchestrator | 2025-08-29 14:40:51 | INFO  | It takes a moment until task 43a84a28-97e8-4662-a955-46a975161117 (workarounds) has been started and output is visible here.
2025-08-29 14:41:16.415165 | orchestrator |
2025-08-29 14:41:16.415317 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:41:16.415337 | orchestrator |
2025-08-29 14:41:16.415349 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-08-29 14:41:16.415361 | orchestrator | Friday 29 August 2025 14:40:55 +0000 (0:00:00.159) 0:00:00.159 *********
2025-08-29 14:41:16.415373 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415384 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415395 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415405 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415416 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415427 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415437 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-08-29 14:41:16.415448 | orchestrator |
2025-08-29 14:41:16.415459 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-08-29 14:41:16.415470 | orchestrator |
2025-08-29 14:41:16.415481 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-08-29 14:41:16.415492 | orchestrator | Friday 29 August 2025 14:40:56 +0000 (0:00:00.788) 0:00:00.947 *********
2025-08-29 14:41:16.415503 | orchestrator | ok: [testbed-manager]
2025-08-29 14:41:16.415515 | orchestrator |
2025-08-29 14:41:16.415526 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-08-29 14:41:16.415536 | orchestrator |
2025-08-29 14:41:16.415556 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-08-29 14:41:16.415568 | orchestrator | Friday 29 August 2025 14:40:58 +0000 (0:00:02.407) 0:00:03.355 *********
2025-08-29 14:41:16.415579 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:41:16.415589 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:41:16.415600 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:41:16.415611 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:41:16.415622 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:41:16.415633 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:41:16.415644 | orchestrator |
2025-08-29 14:41:16.415655 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-08-29 14:41:16.415665 | orchestrator |
2025-08-29 14:41:16.415696 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-08-29 14:41:16.415707 | orchestrator | Friday 29 August 2025 14:41:00 +0000 (0:00:01.763) 0:00:05.118 *********
2025-08-29 14:41:16.415719 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415730 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415741 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415752 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415763 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415773 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 14:41:16.415784 | orchestrator |
2025-08-29 14:41:16.415795 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-08-29 14:41:16.415806 | orchestrator | Friday 29 August 2025 14:41:02 +0000 (0:00:01.582) 0:00:06.700 *********
2025-08-29 14:41:16.415817 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:41:16.415828 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:41:16.415838 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:41:16.415849 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:41:16.415859 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:41:16.415870 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:41:16.415880 | orchestrator |
2025-08-29 14:41:16.415891 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-08-29 14:41:16.415902 | orchestrator | Friday 29 August 2025 14:41:05 +0000 (0:00:03.679) 0:00:10.380 *********
2025-08-29 14:41:16.415912 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:16.415923 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:16.415934 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:16.415944 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:16.415954 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:16.415965 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:16.415976 | orchestrator |
2025-08-29 14:41:16.415987 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-08-29 14:41:16.415998 | orchestrator |
2025-08-29 14:41:16.416009 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-08-29 14:41:16.416019 | orchestrator | Friday 29 August 2025 14:41:06 +0000 (0:00:00.728) 0:00:11.108 *********
2025-08-29 14:41:16.416030 | orchestrator | changed: [testbed-manager]
2025-08-29 14:41:16.416040 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:41:16.416051 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:41:16.416062 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:41:16.416072 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:41:16.416083 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:41:16.416093 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:41:16.416104 | orchestrator |
2025-08-29 14:41:16.416115 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 14:41:16.416125 | orchestrator | Friday 29 August 2025 14:41:08 +0000 (0:00:01.578) 0:00:12.686 *********
2025-08-29 14:41:16.416136 | orchestrator | changed: [testbed-manager]
2025-08-29 14:41:16.416146 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:41:16.416157 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:41:16.416168 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:41:16.416178 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:41:16.416189 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:41:16.416216 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:41:16.416227 | orchestrator |
2025-08-29 14:41:16.416238 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 14:41:16.416268 | orchestrator | Friday 29 August 2025 14:41:09 +0000 (0:00:01.565) 0:00:14.252 *********
2025-08-29 14:41:16.416287 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:41:16.416298 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:41:16.416309 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:41:16.416320 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:41:16.416330 | orchestrator | ok: [testbed-manager]
2025-08-29 14:41:16.416341 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:41:16.416352 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:41:16.416362 | orchestrator |
2025-08-29 14:41:16.416373 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 14:41:16.416384 | orchestrator | Friday 29 August 2025 14:41:11 +0000 (0:00:01.402) 0:00:15.655 *********
2025-08-29 14:41:16.416394 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:41:16.416405 | orchestrator | changed: [testbed-manager]
2025-08-29 14:41:16.416416 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:41:16.416426 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:41:16.416437 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:41:16.416447 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:41:16.416458 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:41:16.416469 | orchestrator |
2025-08-29 14:41:16.416479 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 14:41:16.416490 | orchestrator | Friday 29 August 2025 14:41:13 +0000 (0:00:01.827) 0:00:17.483 *********
2025-08-29 14:41:16.416501 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:41:16.416511 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:16.416522 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:16.416533 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:16.416548 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:16.416559 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:16.416570 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:16.416580 | orchestrator |
2025-08-29 14:41:16.416591 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 14:41:16.416602 | orchestrator |
2025-08-29 14:41:16.416613 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 14:41:16.416624 | orchestrator | Friday 29 August 2025 14:41:13 +0000 (0:00:00.628) 0:00:18.111 *********
2025-08-29 14:41:16.416634 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:41:16.416645 | orchestrator | ok: [testbed-manager]
2025-08-29 14:41:16.416656 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:41:16.416667 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:41:16.416678 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:41:16.416688 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:41:16.416699 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:41:16.416709 | orchestrator |
2025-08-29 14:41:16.416720 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:41:16.416732 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:41:16.416744 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416755 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416766 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416777 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416787 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416798 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:16.416815 | orchestrator |
2025-08-29 14:41:16.416826 | orchestrator |
2025-08-29 14:41:16.416837 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:41:16.416848 | orchestrator | Friday 29 August 2025 14:41:16 +0000 (0:00:02.659) 0:00:20.771 *********
2025-08-29 14:41:16.416859 | orchestrator | ===============================================================================
2025-08-29 14:41:16.416869 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s
2025-08-29 14:41:16.416880 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s
2025-08-29 14:41:16.416891 | orchestrator | Apply netplan configuration --------------------------------------------- 2.41s
2025-08-29 14:41:16.416902 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s
2025-08-29 14:41:16.416912 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s
2025-08-29 14:41:16.416923 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.58s
2025-08-29 14:41:16.416934 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.58s
2025-08-29 14:41:16.416945 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.57s
2025-08-29 14:41:16.416955 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.40s
2025-08-29 14:41:16.416966 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s
2025-08-29 14:41:16.416977 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s
2025-08-29 14:41:16.416994 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-08-29 14:41:17.096815 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:41:29.038856 | orchestrator | 2025-08-29 14:41:29 | INFO  | Task 11ebce6d-ad94-4b48-b135-25e6ad2894fe (reboot) was prepared for execution.
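The workarounds play above distributes a custom CA certificate to every node and then runs `update-ca-certificates` on the Debian-family hosts (the RedHat-family `update-ca-trust` task is skipped in this run). A minimal sketch of that step follows; `install_custom_ca` and its parameters are illustrative, not the play's actual task — the directory and update command are parameterized here only so the sketch can be exercised without root:

```shell
# Sketch of the CA-distribution step: copy the .crt into the system
# CA directory, then refresh the trust store. On a real Debian host
# cert_dir is /usr/local/share/ca-certificates and update_cmd is
# update-ca-certificates; both are overridable here for illustration.
install_custom_ca() {
    crt=$1
    cert_dir=${2:-/usr/local/share/ca-certificates}
    update_cmd=${3:-update-ca-certificates}
    install -m 0644 "$crt" "$cert_dir/" && "$update_cmd"
}
```

On RedHat-family hosts the equivalent would copy into `/etc/pki/ca-trust/source/anchors` and run `update-ca-trust`, which is why the play carries both tasks and skips one per distribution family.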
2025-08-29 14:41:29.038970 | orchestrator | 2025-08-29 14:41:29 | INFO  | It takes a moment until task 11ebce6d-ad94-4b48-b135-25e6ad2894fe (reboot) has been started and output is visible here.
2025-08-29 14:41:38.912174 | orchestrator |
2025-08-29 14:41:38.912300 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.912316 | orchestrator |
2025-08-29 14:41:38.912329 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.912341 | orchestrator | Friday 29 August 2025 14:41:33 +0000 (0:00:00.215) 0:00:00.215 *********
2025-08-29 14:41:38.912352 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:38.912364 | orchestrator |
2025-08-29 14:41:38.912375 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.912386 | orchestrator | Friday 29 August 2025 14:41:33 +0000 (0:00:00.098) 0:00:00.313 *********
2025-08-29 14:41:38.912397 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:41:38.912408 | orchestrator |
2025-08-29 14:41:38.912418 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.912429 | orchestrator | Friday 29 August 2025 14:41:34 +0000 (0:00:00.968) 0:00:01.281 *********
2025-08-29 14:41:38.912440 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:38.912451 | orchestrator |
2025-08-29 14:41:38.912475 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.912487 | orchestrator |
2025-08-29 14:41:38.912498 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.912508 | orchestrator | Friday 29 August 2025 14:41:34 +0000 (0:00:00.108) 0:00:01.390 *********
2025-08-29 14:41:38.912519 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:38.912530 | orchestrator |
2025-08-29 14:41:38.912541 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.912552 | orchestrator | Friday 29 August 2025 14:41:34 +0000 (0:00:00.095) 0:00:01.485 *********
2025-08-29 14:41:38.912583 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:41:38.912595 | orchestrator |
2025-08-29 14:41:38.912606 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.912617 | orchestrator | Friday 29 August 2025 14:41:34 +0000 (0:00:00.696) 0:00:02.182 *********
2025-08-29 14:41:38.912627 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:38.912638 | orchestrator |
2025-08-29 14:41:38.912649 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.912659 | orchestrator |
2025-08-29 14:41:38.912670 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.912681 | orchestrator | Friday 29 August 2025 14:41:35 +0000 (0:00:00.105) 0:00:02.288 *********
2025-08-29 14:41:38.912692 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:38.912702 | orchestrator |
2025-08-29 14:41:38.912713 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.912726 | orchestrator | Friday 29 August 2025 14:41:35 +0000 (0:00:00.218) 0:00:02.506 *********
2025-08-29 14:41:38.912738 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:41:38.912750 | orchestrator |
2025-08-29 14:41:38.912762 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.912774 | orchestrator | Friday 29 August 2025 14:41:35 +0000 (0:00:00.614) 0:00:03.120 *********
2025-08-29 14:41:38.912787 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:38.912799 | orchestrator |
2025-08-29 14:41:38.912814 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.912826 | orchestrator |
2025-08-29 14:41:38.912839 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.912851 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.114) 0:00:03.234 *********
2025-08-29 14:41:38.912863 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:38.912875 | orchestrator |
2025-08-29 14:41:38.912886 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.912899 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.127) 0:00:03.362 *********
2025-08-29 14:41:38.912912 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:41:38.912924 | orchestrator |
2025-08-29 14:41:38.912936 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.912948 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.627) 0:00:03.990 *********
2025-08-29 14:41:38.912960 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:38.912972 | orchestrator |
2025-08-29 14:41:38.912984 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.912996 | orchestrator |
2025-08-29 14:41:38.913008 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.913019 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.116) 0:00:04.107 *********
2025-08-29 14:41:38.913032 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:38.913044 | orchestrator |
2025-08-29 14:41:38.913056 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.913068 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.104) 0:00:04.211 *********
2025-08-29 14:41:38.913080 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:41:38.913092 | orchestrator |
2025-08-29 14:41:38.913103 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.913114 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.648) 0:00:04.860 *********
2025-08-29 14:41:38.913124 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:38.913135 | orchestrator |
2025-08-29 14:41:38.913146 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:41:38.913156 | orchestrator |
2025-08-29 14:41:38.913167 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:41:38.913178 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.107) 0:00:04.968 *********
2025-08-29 14:41:38.913188 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:38.913222 | orchestrator |
2025-08-29 14:41:38.913234 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:41:38.913245 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.095) 0:00:05.063 *********
2025-08-29 14:41:38.913256 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:41:38.913267 | orchestrator |
2025-08-29 14:41:38.913277 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:41:38.913288 | orchestrator | Friday 29 August 2025 14:41:38 +0000 (0:00:00.661) 0:00:05.724 *********
2025-08-29 14:41:38.913315 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:38.913327 | orchestrator |
2025-08-29 14:41:38.913338 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:41:38.913350 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913361 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913372 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913383 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913393 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913404 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:41:38.913415 | orchestrator |
2025-08-29 14:41:38.913426 | orchestrator |
2025-08-29 14:41:38.913437 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:41:38.913455 | orchestrator | Friday 29 August 2025 14:41:38 +0000 (0:00:00.037) 0:00:05.762 *********
2025-08-29 14:41:38.913467 | orchestrator | ===============================================================================
2025-08-29 14:41:38.913477 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.22s
2025-08-29 14:41:38.913488 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2025-08-29 14:41:38.913499 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2025-08-29 14:41:39.335810 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:41:51.293857 | orchestrator | 2025-08-29 14:41:51 | INFO  | Task 082e2850-4309-473a-96d0-df0a0b59bc00 (wait-for-connection) was prepared for execution.
2025-08-29 14:41:51.293996 | orchestrator | 2025-08-29 14:41:51 | INFO  | It takes a moment until task 082e2850-4309-473a-96d0-df0a0b59bc00 (wait-for-connection) has been started and output is visible here.
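The reboot above is deliberately two-phased: `osism apply reboot` triggers the reboot on each node without waiting for it to complete, and the separate `osism apply wait-for-connection` run then polls until every node answers again. A minimal shell sketch of that second phase, assuming a hypothetical `ssh_probe` reachability check and a 5-second poll interval (the real task uses Ansible's connection plumbing, not raw ssh):

```shell
# One reachability attempt; BatchMode avoids hanging on a password prompt.
ssh_probe() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# Poll the host with a probe command until it succeeds or the timeout
# budget is spent. The probe is a parameter so it can be stubbed out.
wait_for_connection() {
    host=$1
    timeout=${2:-300}
    probe=${3:-ssh_probe}
    waited=0
    until "$probe" "$host" 2>/dev/null; do
        waited=$((waited + 5))
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
        sleep 5
    done
}
```

Splitting reboot and reconnect into separate plays keeps the reboot play short even when a node is slow to come back; the wait is accounted for in its own task (11.71s in the recap below).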
2025-08-29 14:42:07.377104 | orchestrator |
2025-08-29 14:42:07.377266 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-08-29 14:42:07.377286 | orchestrator |
2025-08-29 14:42:07.377299 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-08-29 14:42:07.377312 | orchestrator | Friday 29 August 2025 14:41:55 +0000 (0:00:00.269) 0:00:00.269 *********
2025-08-29 14:42:07.377323 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:07.377335 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:07.377346 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:07.377357 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:07.377368 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:07.377378 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:07.377389 | orchestrator |
2025-08-29 14:42:07.377400 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:07.377412 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377455 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377467 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377478 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377489 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377500 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:07.377511 | orchestrator |
2025-08-29 14:42:07.377522 | orchestrator |
2025-08-29 14:42:07.377532 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:07.377544 | orchestrator | Friday 29 August 2025 14:42:07 +0000 (0:00:11.706) 0:00:11.975 *********
2025-08-29 14:42:07.377554 | orchestrator | ===============================================================================
2025-08-29 14:42:07.377565 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.71s
2025-08-29 14:42:07.735496 | orchestrator | + osism apply hddtemp
2025-08-29 14:42:19.727533 | orchestrator | 2025-08-29 14:42:19 | INFO  | Task 7cc0bbc5-5a6a-4f26-89c6-b44360301851 (hddtemp) was prepared for execution.
2025-08-29 14:42:19.727631 | orchestrator | 2025-08-29 14:42:19 | INFO  | It takes a moment until task 7cc0bbc5-5a6a-4f26-89c6-b44360301851 (hddtemp) has been started and output is visible here.
2025-08-29 14:42:46.710695 | orchestrator |
2025-08-29 14:42:46.710828 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-08-29 14:42:46.710846 | orchestrator |
2025-08-29 14:42:46.710860 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-08-29 14:42:46.710872 | orchestrator | Friday 29 August 2025 14:42:24 +0000 (0:00:00.292) 0:00:00.292 *********
2025-08-29 14:42:46.710883 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:46.710930 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:46.710943 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:46.710954 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:46.710965 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:46.710976 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:46.710987 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:46.710998 | orchestrator |
2025-08-29 14:42:46.711009 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-08-29 14:42:46.711020 | orchestrator | Friday 29 August 2025 14:42:25 +0000 (0:00:00.828) 0:00:01.121 *********
2025-08-29 14:42:46.711050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:42:46.711064 | orchestrator |
2025-08-29 14:42:46.711076 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-08-29 14:42:46.711087 | orchestrator | Friday 29 August 2025 14:42:26 +0000 (0:00:01.252) 0:00:02.373 *********
2025-08-29 14:42:46.711097 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:46.711152 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:46.711164 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:46.711175 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:46.711186 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:46.711197 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:46.711209 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:46.711222 | orchestrator |
2025-08-29 14:42:46.711234 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-08-29 14:42:46.711246 | orchestrator | Friday 29 August 2025 14:42:28 +0000 (0:00:01.820) 0:00:04.194 *********
2025-08-29 14:42:46.711299 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:46.711315 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:46.711327 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:46.711339 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:46.711351 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:46.711364 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:46.711376 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:46.711388 | orchestrator |
2025-08-29 14:42:46.711400 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-08-29 14:42:46.711412 | orchestrator | Friday 29 August 2025 14:42:29 +0000 (0:00:01.137) 0:00:05.332 *********
2025-08-29 14:42:46.711425 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:46.711437 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:46.711449 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:46.711461 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:46.711473 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:46.711485 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:46.711510 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:46.711534 | orchestrator |
2025-08-29 14:42:46.711547 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-08-29 14:42:46.711559 | orchestrator | Friday 29 August 2025 14:42:30 +0000 (0:00:01.155) 0:00:06.487 *********
2025-08-29 14:42:46.711571 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:46.711582 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:46.711593 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:46.711604 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:46.711615 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:46.711625 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:46.711636 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:46.711647 | orchestrator |
2025-08-29 14:42:46.711658 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-08-29 14:42:46.711669 | orchestrator | Friday 29 August 2025 14:42:31 +0000 (0:00:00.860) 0:00:07.347 *********
2025-08-29 14:42:46.711680 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:46.711691 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:46.711701 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:46.711712 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:46.711723 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:46.711733 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:46.711744 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:46.711755 | orchestrator |
2025-08-29 14:42:46.711777 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-08-29 14:42:46.711788 | orchestrator | Friday 29 August 2025 14:42:43 +0000 (0:00:11.795) 0:00:19.143 *********
2025-08-29 14:42:46.711799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:42:46.711810 | orchestrator |
2025-08-29 14:42:46.711821 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-08-29 14:42:46.711832 | orchestrator | Friday 29 August 2025 14:42:44 +0000 (0:00:01.377) 0:00:20.521 *********
2025-08-29 14:42:46.711843 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:46.711854 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:46.711865 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:46.711875 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:46.711886 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:46.711896 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:46.711907 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:46.711918 | orchestrator |
2025-08-29 14:42:46.711929 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:46.711941 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:42:46.711980 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.711993 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.712003 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.712015 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.712026 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.712042 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:46.712053 | orchestrator |
2025-08-29 14:42:46.712064 | orchestrator |
2025-08-29 14:42:46.712076 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:46.712087 | orchestrator | Friday 29 August 2025 14:42:46 +0000 (0:00:01.931) 0:00:22.452 *********
2025-08-29 14:42:46.712097 | orchestrator | ===============================================================================
2025-08-29 14:42:46.712129 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.80s
2025-08-29 14:42:46.712140 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s
2025-08-29 14:42:46.712151 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.82s
2025-08-29 14:42:46.712162 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s
2025-08-29 14:42:46.712172 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s
2025-08-29 14:42:46.712197 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s
2025-08-29 14:42:46.712207 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s
2025-08-29 14:42:46.712218 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.86s
2025-08-29 14:42:46.712229 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.83s
2025-08-29 14:42:46.993635 | orchestrator | ++ semver 9.2.0 7.1.1
2025-08-29 14:42:47.040394 | orchestrator | + [[ 1 -ge 0 ]]
2025-08-29 14:42:47.040495 | orchestrator | + sudo systemctl restart manager.service
2025-08-29 14:43:00.823767 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 14:43:00.823895 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 14:43:00.823914 | orchestrator | + local max_attempts=60
2025-08-29 14:43:00.823927 | orchestrator | + local name=ceph-ansible
2025-08-29 14:43:00.823938 | orchestrator | + local attempt_num=1
2025-08-29 14:43:00.823950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:43:00.853666 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:43:00.853766 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:43:00.853783 | orchestrator | + sleep 5
2025-08-29 14:43:05.857599 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:43:05.889223 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:43:05.889285 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:43:05.889295 | orchestrator | + sleep 5
2025-08-29 14:43:10.892167 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:43:10.928856 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:43:10.928932 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 14:43:10.928945 | orchestrator | + sleep 5
2025-08-29 14:43:15.933362 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 14:43:15.969790 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 14:43:15.969958 | orchestrator |
+ (( attempt_num++ == max_attempts )) 2025-08-29 14:43:15.969974 | orchestrator | + sleep 5 2025-08-29 14:43:20.973664 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:21.013513 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:21.013589 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:21.013603 | orchestrator | + sleep 5 2025-08-29 14:43:26.019087 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:26.056084 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:26.056166 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:26.056180 | orchestrator | + sleep 5 2025-08-29 14:43:31.061107 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:31.099831 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:31.099927 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:31.099941 | orchestrator | + sleep 5 2025-08-29 14:43:36.106613 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:36.137946 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:36.138099 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:36.138113 | orchestrator | + sleep 5 2025-08-29 14:43:41.142544 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:41.172784 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:41.176364 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:41.176439 | orchestrator | + sleep 5 2025-08-29 14:43:46.180000 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:46.218371 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:46.218474 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-08-29 14:43:46.218489 | orchestrator | + sleep 5 2025-08-29 14:43:51.223191 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:51.268254 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:51.268374 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:51.268394 | orchestrator | + sleep 5 2025-08-29 14:43:56.273110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:43:56.314820 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:43:56.314903 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:43:56.314917 | orchestrator | + sleep 5 2025-08-29 14:44:01.319874 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:01.361973 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:01.390106 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:01.390189 | orchestrator | + sleep 5 2025-08-29 14:44:06.366784 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:06.405223 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:06.405313 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:44:06.405329 | orchestrator | + local max_attempts=60 2025-08-29 14:44:06.405341 | orchestrator | + local name=kolla-ansible 2025-08-29 14:44:06.405353 | orchestrator | + local attempt_num=1 2025-08-29 14:44:06.406097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:44:06.435882 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:06.435928 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:44:06.435941 | orchestrator | + local max_attempts=60 2025-08-29 14:44:06.435953 | orchestrator | + local name=osism-ansible 2025-08-29 14:44:06.435964 | 
orchestrator | + local attempt_num=1 2025-08-29 14:44:06.437016 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 14:44:06.476980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:06.477018 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:44:06.477030 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:44:06.671323 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 14:44:06.838291 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 14:44:07.029597 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 14:44:07.200094 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 14:44:07.200190 | orchestrator | + osism apply gather-facts 2025-08-29 14:44:19.250229 | orchestrator | 2025-08-29 14:44:19 | INFO  | Task f6abe22c-c655-4183-9435-428ddafc0292 (gather-facts) was prepared for execution. 2025-08-29 14:44:19.250371 | orchestrator | 2025-08-29 14:44:19 | INFO  | It takes a moment until task f6abe22c-c655-4183-9435-428ddafc0292 (gather-facts) has been started and output is visible here. 
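The xtrace above repeats the same four steps per attempt: `docker inspect` the health status, compare against `healthy`, bump an attempt counter, sleep 5 seconds. A minimal sketch of such a `wait_for_container_healthy` helper, reconstructed from the trace (the real script ships with the testbed configuration and may differ, e.g. it invokes `/usr/bin/docker` by absolute path):

```shell
#!/usr/bin/env bash
# Sketch of a container health-wait loop, reconstructed from the xtrace above.
# Assumptions: `docker` is on PATH and the container defines a HEALTHCHECK,
# so .State.Health.Status cycles through starting -> healthy/unhealthy.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

Called as in the trace, `wait_for_container_healthy 60 ceph-ansible` allows up to roughly five minutes (60 attempts at 5-second intervals); here ceph-ansible went unhealthy → starting → healthy in a little over a minute.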
2025-08-29 14:44:32.804515 | orchestrator |
2025-08-29 14:44:32.804635 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:44:32.804653 | orchestrator |
2025-08-29 14:44:32.804665 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:44:32.804677 | orchestrator | Friday 29 August 2025  14:44:23 +0000 (0:00:00.243)       0:00:00.243 *********
2025-08-29 14:44:32.804688 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:44:32.804700 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:44:32.804711 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:44:32.804722 | orchestrator | ok: [testbed-manager]
2025-08-29 14:44:32.804733 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:44:32.804745 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:44:32.804765 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:44:32.804794 | orchestrator |
2025-08-29 14:44:32.804814 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 14:44:32.804832 | orchestrator |
2025-08-29 14:44:32.804849 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 14:44:32.804867 | orchestrator | Friday 29 August 2025  14:44:31 +0000 (0:00:08.584)       0:00:08.827 *********
2025-08-29 14:44:32.804887 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:44:32.804905 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:44:32.804925 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:44:32.804944 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:44:32.804963 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:44:32.804978 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:44:32.804988 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:44:32.804999 | orchestrator |
2025-08-29 14:44:32.805010 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:44:32.805022 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805034 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805045 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805056 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805069 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805082 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805094 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:44:32.805106 | orchestrator |
2025-08-29 14:44:32.805119 | orchestrator |
2025-08-29 14:44:32.805131 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:44:32.805144 | orchestrator | Friday 29 August 2025  14:44:32 +0000 (0:00:00.556)       0:00:09.384 *********
2025-08-29 14:44:32.805156 | orchestrator | ===============================================================================
2025-08-29 14:44:32.805168 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.58s
2025-08-29 14:44:32.805224 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-08-29 14:44:33.147050 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-08-29 14:44:33.160064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-08-29 14:44:33.172809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-08-29 14:44:33.192017 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-08-29 14:44:33.214868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-08-29 14:44:33.230648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-08-29 14:44:33.252255 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-08-29 14:44:33.273013 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-08-29 14:44:33.293140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-08-29 14:44:33.310891 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-08-29 14:44:33.333162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-08-29 14:44:33.359509 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-08-29 14:44:33.381757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-08-29 14:44:33.404750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-08-29 14:44:33.423450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-08-29 14:44:33.434741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-08-29 14:44:33.444331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-08-29 14:44:33.453911 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-08-29 14:44:33.476076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-08-29 14:44:33.493525 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-08-29 14:44:33.512317 | orchestrator | + [[ false == \t\r\u\e ]]
2025-08-29 14:44:33.610033 | orchestrator | ok: Runtime: 0:23:13.718200
2025-08-29 14:44:33.700258 |
2025-08-29 14:44:33.700440 | TASK [Deploy services]
2025-08-29 14:44:34.234561 | orchestrator | skipping: Conditional result was False
2025-08-29 14:44:34.252871 |
2025-08-29 14:44:34.253033 | TASK [Deploy in a nutshell]
2025-08-29 14:44:34.982378 | orchestrator |
2025-08-29 14:44:34.982584 | orchestrator | # PULL IMAGES
2025-08-29 14:44:34.982621 | orchestrator |
2025-08-29 14:44:34.982646 | orchestrator | + set -e
2025-08-29 14:44:34.982676 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:44:34.982707 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:44:34.982732 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:44:34.982790 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:44:34.982825 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 14:44:34.982850 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:44:34.982871 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:44:34.982901 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:44:34.982923 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:44:34.983013 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:44:34.983035 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:44:34.983065 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:44:34.983085 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 14:44:34.983108 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 14:44:34.983129 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:44:34.983151 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:44:34.983191 | orchestrator | ++ export ARA=false
2025-08-29 14:44:34.983263 | orchestrator | ++ ARA=false
2025-08-29 14:44:34.983285 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:44:34.983306 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:44:34.983326 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:44:34.983346 | orchestrator | ++ TEMPEST=false
2025-08-29 14:44:34.983366 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:44:34.983386 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:44:34.983406 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:44:34.983427 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 14:44:34.983447 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:44:34.983485 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:44:34.983504 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:44:34.983526 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:44:34.983546 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:44:34.983565 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:44:34.983583 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:44:34.983692 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:44:34.983719 | orchestrator | + echo
2025-08-29 14:44:34.983740 | orchestrator | + echo '# PULL IMAGES'
2025-08-29 14:44:34.983759 | orchestrator | + echo
2025-08-29 14:44:34.983793 | orchestrator | ++ semver 9.2.0 7.0.0
2025-08-29 14:44:35.052313 | orchestrator | + [[ 1 -ge 0 ]]
2025-08-29 14:44:35.052800 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-08-29 14:44:36.862125 | orchestrator | 2025-08-29 14:44:36 | INFO  | Trying to run play pull-images in environment custom
2025-08-29 14:44:47.120256 | orchestrator | 2025-08-29 14:44:47 | INFO  | Task 40b08c80-be0a-47be-a74d-9161626c0ada (pull-images) was prepared for execution.
2025-08-29 14:44:47.120376 | orchestrator | 2025-08-29 14:44:47 | INFO  | Task 40b08c80-be0a-47be-a74d-9161626c0ada is running in background. No more output. Check ARA for logs.
2025-08-29 14:44:49.488798 | orchestrator | 2025-08-29 14:44:49 | INFO  | Trying to run play wipe-partitions in environment custom
2025-08-29 14:44:59.652777 | orchestrator | 2025-08-29 14:44:59 | INFO  | Task 178b1dce-a3d4-427a-b7f8-b57a93778740 (wipe-partitions) was prepared for execution.
2025-08-29 14:44:59.652855 | orchestrator | 2025-08-29 14:44:59 | INFO  | It takes a moment until task 178b1dce-a3d4-427a-b7f8-b57a93778740 (wipe-partitions) has been started and output is visible here.
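The version gates in the trace (`++ semver 9.2.0 7.1.1` followed by `+ [[ 1 -ge 0 ]]`, and likewise `semver 9.2.0 7.0.0` here) suggest a helper that prints 1, 0, or -1 depending on whether the first version is newer than, equal to, or older than the second, with the caller treating `-ge 0` as "manager version at least X". The actual `semver` helper comes from the sourced scripts and is not shown; a hypothetical stand-in with the same contract could be built on GNU `sort -V`:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper seen in the xtrace above
# (the real implementation is not shown in this log). Prints 1 if $1 > $2,
# 0 if equal, -1 if $1 < $2. Requires GNU sort for version ordering (-V).
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1    # $2 sorts first, so $1 is the newer version
    else
        echo -1
    fi
}

# Version-gated step, mirroring the pattern in the trace:
if [[ "$(semver "${MANAGER_VERSION:-9.2.0}" 7.0.0)" -ge 0 ]]; then
    echo "manager >= 7.0.0: would run 'osism apply --no-wait -r 2 -e custom pull-images'"
fi
```

With `MANAGER_VERSION=9.2.0` both gates in the trace evaluate to 1, so the restart and the pull-images dispatch run unconditionally on this release.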
2025-08-29 14:45:13.149659 | orchestrator |
2025-08-29 14:45:13.149766 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-08-29 14:45:13.149783 | orchestrator |
2025-08-29 14:45:13.149795 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-08-29 14:45:13.149813 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.133)       0:00:00.133 *********
2025-08-29 14:45:13.149824 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:45:13.149836 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:45:13.149847 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:45:13.149858 | orchestrator |
2025-08-29 14:45:13.149870 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-08-29 14:45:13.149903 | orchestrator | Friday 29 August 2025  14:45:04 +0000 (0:00:00.558)       0:00:00.691 *********
2025-08-29 14:45:13.149915 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:45:13.149926 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:13.149936 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:45:13.149950 | orchestrator |
2025-08-29 14:45:13.149962 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-08-29 14:45:13.149973 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.264)       0:00:00.956 *********
2025-08-29 14:45:13.149984 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:45:13.149994 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:13.150005 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:45:13.150061 | orchestrator |
2025-08-29 14:45:13.150074 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-08-29 14:45:13.150085 | orchestrator | Friday 29 August 2025  14:45:05 +0000 (0:00:00.700)       0:00:01.656 *********
2025-08-29 14:45:13.150097 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:45:13.150107 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:13.150118 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:45:13.150129 | orchestrator |
2025-08-29 14:45:13.150140 | orchestrator | TASK [Check device availability] ***********************************************
2025-08-29 14:45:13.150150 | orchestrator | Friday 29 August 2025  14:45:06 +0000 (0:00:00.256)       0:00:01.912 *********
2025-08-29 14:45:13.150161 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:45:13.150177 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:45:13.150220 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:45:13.150242 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:45:13.150260 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:45:13.150277 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:45:13.150296 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:45:13.150316 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:45:13.150335 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:45:13.150353 | orchestrator |
2025-08-29 14:45:13.150371 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-08-29 14:45:13.150390 | orchestrator | Friday 29 August 2025  14:45:07 +0000 (0:00:01.160)       0:00:03.073 *********
2025-08-29 14:45:13.150409 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:45:13.150428 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:45:13.150448 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:45:13.150468 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:45:13.150487 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:45:13.150505 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:45:13.150523 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:45:13.150542 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:45:13.150560 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:45:13.150579 | orchestrator |
2025-08-29 14:45:13.150598 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-08-29 14:45:13.150617 | orchestrator | Friday 29 August 2025  14:45:08 +0000 (0:00:01.296)       0:00:04.370 *********
2025-08-29 14:45:13.150636 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-08-29 14:45:13.150654 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-08-29 14:45:13.150674 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-08-29 14:45:13.150693 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-08-29 14:45:13.150711 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-08-29 14:45:13.150729 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-08-29 14:45:13.150747 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-08-29 14:45:13.150764 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-08-29 14:45:13.150804 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-08-29 14:45:13.150824 | orchestrator |
2025-08-29 14:45:13.150843 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-08-29 14:45:13.150861 | orchestrator | Friday 29 August 2025  14:45:11 +0000 (0:00:03.144)       0:00:07.514 *********
2025-08-29 14:45:13.150880 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:45:13.150899 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:45:13.150918 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:45:13.150936 | orchestrator |
2025-08-29 14:45:13.150955 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-08-29 14:45:13.150974 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.587)       0:00:08.101 *********
2025-08-29 14:45:13.150992 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:45:13.151012 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:45:13.151031 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:45:13.151042 | orchestrator |
2025-08-29 14:45:13.151053 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:45:13.151064 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:13.151078 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:13.151107 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:13.151118 | orchestrator |
2025-08-29 14:45:13.151129 | orchestrator |
2025-08-29 14:45:13.151140 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:45:13.151150 | orchestrator | Friday 29 August 2025  14:45:12 +0000 (0:00:00.587)       0:00:08.688 *********
2025-08-29 14:45:13.151161 | orchestrator | ===============================================================================
2025-08-29 14:45:13.151171 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.14s
2025-08-29 14:45:13.151182 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s
2025-08-29 14:45:13.151192 | orchestrator | Check device availability ----------------------------------------------- 1.16s
2025-08-29 14:45:13.151234 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s
2025-08-29 14:45:13.151247 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2025-08-29 14:45:13.151258 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s
2025-08-29 14:45:13.151269 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s
2025-08-29 14:45:13.151279 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2025-08-29 14:45:13.151290 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2025-08-29 14:45:25.286326 | orchestrator | 2025-08-29 14:45:25 | INFO  | Task 0ad4616f-7774-4601-9655-253f6a031d57 (facts) was prepared for execution.
2025-08-29 14:45:25.287252 | orchestrator | 2025-08-29 14:45:25 | INFO  | It takes a moment until task 0ad4616f-7774-4601-9655-253f6a031d57 (facts) has been started and output is visible here.
2025-08-29 14:45:37.582515 | orchestrator |
2025-08-29 14:45:37.582618 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 14:45:37.582636 | orchestrator |
2025-08-29 14:45:37.582648 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 14:45:37.582661 | orchestrator | Friday 29 August 2025  14:45:29 +0000 (0:00:00.277)       0:00:00.277 *********
2025-08-29 14:45:37.582673 | orchestrator | ok: [testbed-manager]
2025-08-29 14:45:37.582685 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:45:37.582696 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:45:37.582706 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:45:37.582745 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:45:37.582757 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:37.582768 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:45:37.582779 | orchestrator |
2025-08-29 14:45:37.582791 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 14:45:37.582802 | orchestrator | Friday 29 August 2025  14:45:30 +0000 (0:00:01.094)       0:00:01.371 *********
2025-08-29 14:45:37.582813 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:45:37.582826 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:45:37.582836 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:45:37.582847 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:45:37.582857 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:45:37.582867 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:37.582878 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:45:37.582888 | orchestrator |
2025-08-29 14:45:37.582899 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:45:37.582908 | orchestrator |
2025-08-29 14:45:37.582934 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:45:37.582946 | orchestrator | Friday 29 August 2025  14:45:31 +0000 (0:00:01.266)       0:00:02.638 *********
2025-08-29 14:45:37.582956 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:45:37.582966 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:45:37.582977 | orchestrator | ok: [testbed-manager]
2025-08-29 14:45:37.582988 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:45:37.583000 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:45:37.583009 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:45:37.583019 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:45:37.583029 | orchestrator |
2025-08-29 14:45:37.583040 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 14:45:37.583051 | orchestrator |
2025-08-29 14:45:37.583063 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 14:45:37.583077 | orchestrator | Friday 29 August 2025  14:45:36 +0000 (0:00:04.948)       0:00:07.586 *********
2025-08-29 14:45:37.583090 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:45:37.583104 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:45:37.583117 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:45:37.583131 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:45:37.583145 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:45:37.583158 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:45:37.583171 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:45:37.583185 | orchestrator |
2025-08-29 14:45:37.583196 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:45:37.583208 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583259 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583270 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583282 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583292 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583302 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583314 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:45:37.583324 | orchestrator |
2025-08-29 14:45:37.583334 | orchestrator |
2025-08-29 14:45:37.583345 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:45:37.583371 | orchestrator | Friday 29 August 2025  14:45:37 +0000 (0:00:00.525)       0:00:08.112 *********
2025-08-29 14:45:37.583381 | orchestrator | ===============================================================================
2025-08-29 14:45:37.583390 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.95s
2025-08-29 14:45:37.583400 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2025-08-29 14:45:37.583409 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-08-29 14:45:37.583418 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-08-29 14:45:39.765756 | orchestrator | 2025-08-29 14:45:39 | INFO  | Task 22a2263b-12e3-4845-b1ab-dd52effd6c36 (ceph-configure-lvm-volumes) was prepared for execution.
2025-08-29 14:45:39.765832 | orchestrator | 2025-08-29 14:45:39 | INFO  | It takes a moment until task 22a2263b-12e3-4845-b1ab-dd52effd6c36 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-08-29 14:45:51.554717 | orchestrator |
2025-08-29 14:45:51.554867 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 14:45:51.554886 | orchestrator |
2025-08-29 14:45:51.554901 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:45:51.554917 | orchestrator | Friday 29 August 2025  14:45:43 +0000 (0:00:00.328)       0:00:00.328 *********
2025-08-29 14:45:51.554933 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 14:45:51.554947 | orchestrator |
2025-08-29 14:45:51.554961 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:45:51.554975 | orchestrator | Friday 29 August 2025  14:45:44 +0000 (0:00:00.231)       0:00:00.560 *********
2025-08-29 14:45:51.554991 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:45:51.555007 | orchestrator |
2025-08-29 14:45:51.555023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:51.555037 | orchestrator | Friday 29 August 2025  14:45:44 +0000 (0:00:00.220)       0:00:00.780 *********
2025-08-29 14:45:51.555053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 14:45:51.555069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 14:45:51.555097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 14:45:51.555107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 14:45:51.555116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 14:45:51.555125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 14:45:51.555134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 14:45:51.555143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 14:45:51.555152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 14:45:51.555160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 14:45:51.555169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 14:45:51.555177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 14:45:51.555186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 14:45:51.555195 | orchestrator |
2025-08-29 14:45:51.555205 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:45:51.555215 | orchestrator
| Friday 29 August 2025 14:45:44 +0000 (0:00:00.366) 0:00:01.147 ********* 2025-08-29 14:45:51.555254 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555271 | orchestrator | 2025-08-29 14:45:51.555320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555336 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.469) 0:00:01.616 ********* 2025-08-29 14:45:51.555351 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555366 | orchestrator | 2025-08-29 14:45:51.555380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555394 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.204) 0:00:01.821 ********* 2025-08-29 14:45:51.555410 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555427 | orchestrator | 2025-08-29 14:45:51.555446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555464 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.196) 0:00:02.017 ********* 2025-08-29 14:45:51.555482 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555500 | orchestrator | 2025-08-29 14:45:51.555526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555545 | orchestrator | Friday 29 August 2025 14:45:45 +0000 (0:00:00.195) 0:00:02.212 ********* 2025-08-29 14:45:51.555564 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555582 | orchestrator | 2025-08-29 14:45:51.555599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555617 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.180) 0:00:02.393 ********* 2025-08-29 14:45:51.555635 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555653 | orchestrator | 2025-08-29 
14:45:51.555670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555687 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.217) 0:00:02.611 ********* 2025-08-29 14:45:51.555704 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555720 | orchestrator | 2025-08-29 14:45:51.555737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555754 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.224) 0:00:02.835 ********* 2025-08-29 14:45:51.555770 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.555787 | orchestrator | 2025-08-29 14:45:51.555802 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555820 | orchestrator | Friday 29 August 2025 14:45:46 +0000 (0:00:00.208) 0:00:03.044 ********* 2025-08-29 14:45:51.555837 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78) 2025-08-29 14:45:51.555858 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78) 2025-08-29 14:45:51.555873 | orchestrator | 2025-08-29 14:45:51.555889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.555907 | orchestrator | Friday 29 August 2025 14:45:47 +0000 (0:00:00.409) 0:00:03.453 ********* 2025-08-29 14:45:51.555953 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714) 2025-08-29 14:45:51.555966 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714) 2025-08-29 14:45:51.555978 | orchestrator | 2025-08-29 14:45:51.555989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.556007 | orchestrator | Friday 
29 August 2025 14:45:47 +0000 (0:00:00.418) 0:00:03.872 ********* 2025-08-29 14:45:51.556019 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c) 2025-08-29 14:45:51.556030 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c) 2025-08-29 14:45:51.556041 | orchestrator | 2025-08-29 14:45:51.556052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.556062 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.622) 0:00:04.494 ********* 2025-08-29 14:45:51.556073 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34) 2025-08-29 14:45:51.556094 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34) 2025-08-29 14:45:51.556105 | orchestrator | 2025-08-29 14:45:51.556116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:51.556127 | orchestrator | Friday 29 August 2025 14:45:48 +0000 (0:00:00.631) 0:00:05.125 ********* 2025-08-29 14:45:51.556137 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:45:51.556148 | orchestrator | 2025-08-29 14:45:51.556159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556170 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.732) 0:00:05.857 ********* 2025-08-29 14:45:51.556180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:45:51.556192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:45:51.556208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:45:51.556264 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:45:51.556282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:45:51.556300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:45:51.556318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:45:51.556336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:45:51.556356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:45:51.556373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:45:51.556390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:45:51.556401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:45:51.556412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:45:51.556423 | orchestrator | 2025-08-29 14:45:51.556433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556444 | orchestrator | Friday 29 August 2025 14:45:49 +0000 (0:00:00.409) 0:00:06.267 ********* 2025-08-29 14:45:51.556455 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556466 | orchestrator | 2025-08-29 14:45:51.556476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556487 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.197) 0:00:06.465 ********* 2025-08-29 14:45:51.556498 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:45:51.556508 | orchestrator | 2025-08-29 14:45:51.556519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556530 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.196) 0:00:06.661 ********* 2025-08-29 14:45:51.556540 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556551 | orchestrator | 2025-08-29 14:45:51.556562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556572 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.201) 0:00:06.862 ********* 2025-08-29 14:45:51.556583 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556593 | orchestrator | 2025-08-29 14:45:51.556604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556615 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.195) 0:00:07.058 ********* 2025-08-29 14:45:51.556626 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556636 | orchestrator | 2025-08-29 14:45:51.556647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556668 | orchestrator | Friday 29 August 2025 14:45:50 +0000 (0:00:00.205) 0:00:07.263 ********* 2025-08-29 14:45:51.556678 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556689 | orchestrator | 2025-08-29 14:45:51.556700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556711 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.206) 0:00:07.470 ********* 2025-08-29 14:45:51.556721 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:51.556732 | orchestrator | 2025-08-29 14:45:51.556743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:51.556754 | orchestrator | Friday 29 
August 2025 14:45:51 +0000 (0:00:00.215) 0:00:07.685 ********* 2025-08-29 14:45:51.556775 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.416199 | orchestrator | 2025-08-29 14:45:59.417206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:59.417265 | orchestrator | Friday 29 August 2025 14:45:51 +0000 (0:00:00.213) 0:00:07.899 ********* 2025-08-29 14:45:59.417279 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:45:59.417293 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:45:59.417305 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:45:59.417317 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:45:59.417328 | orchestrator | 2025-08-29 14:45:59.417339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:59.417372 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.991) 0:00:08.890 ********* 2025-08-29 14:45:59.417384 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417395 | orchestrator | 2025-08-29 14:45:59.417406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:59.417417 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.214) 0:00:09.105 ********* 2025-08-29 14:45:59.417428 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417438 | orchestrator | 2025-08-29 14:45:59.417449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:45:59.417460 | orchestrator | Friday 29 August 2025 14:45:52 +0000 (0:00:00.224) 0:00:09.329 ********* 2025-08-29 14:45:59.417471 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417482 | orchestrator | 2025-08-29 14:45:59.417493 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 
14:45:59.417504 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.235) 0:00:09.564 ********* 2025-08-29 14:45:59.417514 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417525 | orchestrator | 2025-08-29 14:45:59.417536 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:45:59.417547 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.250) 0:00:09.815 ********* 2025-08-29 14:45:59.417558 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:45:59.417569 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:45:59.417580 | orchestrator | 2025-08-29 14:45:59.417591 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:45:59.417602 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.233) 0:00:10.048 ********* 2025-08-29 14:45:59.417613 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417623 | orchestrator | 2025-08-29 14:45:59.417634 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:45:59.417645 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.133) 0:00:10.182 ********* 2025-08-29 14:45:59.417656 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417667 | orchestrator | 2025-08-29 14:45:59.417683 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:45:59.417701 | orchestrator | Friday 29 August 2025 14:45:53 +0000 (0:00:00.147) 0:00:10.330 ********* 2025-08-29 14:45:59.417720 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.417738 | orchestrator | 2025-08-29 14:45:59.417787 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:45:59.417806 | orchestrator | Friday 29 August 2025 14:45:54 +0000 
(0:00:00.135) 0:00:10.465 ********* 2025-08-29 14:45:59.417821 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:59.417837 | orchestrator | 2025-08-29 14:45:59.417855 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:45:59.417871 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.127) 0:00:10.592 ********* 2025-08-29 14:45:59.417889 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dda150a8-39d5-5493-abc9-b03fdb7d62e3'}}) 2025-08-29 14:45:59.417907 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c0ce2805-49d2-5cc8-844e-183b484fa1c4'}}) 2025-08-29 14:45:59.417924 | orchestrator | 2025-08-29 14:45:59.417941 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:45:59.417959 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.163) 0:00:10.756 ********* 2025-08-29 14:45:59.417977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dda150a8-39d5-5493-abc9-b03fdb7d62e3'}})  2025-08-29 14:45:59.418012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c0ce2805-49d2-5cc8-844e-183b484fa1c4'}})  2025-08-29 14:45:59.418103 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418123 | orchestrator | 2025-08-29 14:45:59.418141 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:45:59.418160 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.164) 0:00:10.921 ********* 2025-08-29 14:45:59.418178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dda150a8-39d5-5493-abc9-b03fdb7d62e3'}})  2025-08-29 14:45:59.418196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c0ce2805-49d2-5cc8-844e-183b484fa1c4'}})  
2025-08-29 14:45:59.418214 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418278 | orchestrator | 2025-08-29 14:45:59.418298 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:45:59.418317 | orchestrator | Friday 29 August 2025 14:45:54 +0000 (0:00:00.156) 0:00:11.078 ********* 2025-08-29 14:45:59.418410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dda150a8-39d5-5493-abc9-b03fdb7d62e3'}})  2025-08-29 14:45:59.418430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c0ce2805-49d2-5cc8-844e-183b484fa1c4'}})  2025-08-29 14:45:59.418450 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418469 | orchestrator | 2025-08-29 14:45:59.418517 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:45:59.418583 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.380) 0:00:11.458 ********* 2025-08-29 14:45:59.418601 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:59.418619 | orchestrator | 2025-08-29 14:45:59.418636 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:45:59.418654 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.150) 0:00:11.608 ********* 2025-08-29 14:45:59.418673 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:59.418692 | orchestrator | 2025-08-29 14:45:59.418712 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:45:59.418731 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.140) 0:00:11.749 ********* 2025-08-29 14:45:59.418750 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418769 | orchestrator | 2025-08-29 14:45:59.418788 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 
14:45:59.418807 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.145) 0:00:11.894 ********* 2025-08-29 14:45:59.418826 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418845 | orchestrator | 2025-08-29 14:45:59.418864 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:45:59.418904 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.148) 0:00:12.042 ********* 2025-08-29 14:45:59.418924 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.418943 | orchestrator | 2025-08-29 14:45:59.418963 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:45:59.418982 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.136) 0:00:12.179 ********* 2025-08-29 14:45:59.419000 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:45:59.419019 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:45:59.419039 | orchestrator |  "sdb": { 2025-08-29 14:45:59.419059 | orchestrator |  "osd_lvm_uuid": "dda150a8-39d5-5493-abc9-b03fdb7d62e3" 2025-08-29 14:45:59.419079 | orchestrator |  }, 2025-08-29 14:45:59.419098 | orchestrator |  "sdc": { 2025-08-29 14:45:59.419117 | orchestrator |  "osd_lvm_uuid": "c0ce2805-49d2-5cc8-844e-183b484fa1c4" 2025-08-29 14:45:59.419136 | orchestrator |  } 2025-08-29 14:45:59.419155 | orchestrator |  } 2025-08-29 14:45:59.419174 | orchestrator | } 2025-08-29 14:45:59.419193 | orchestrator | 2025-08-29 14:45:59.419212 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:45:59.419390 | orchestrator | Friday 29 August 2025 14:45:55 +0000 (0:00:00.155) 0:00:12.334 ********* 2025-08-29 14:45:59.419412 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.419432 | orchestrator | 2025-08-29 14:45:59.419452 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 
14:45:59.419472 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.154) 0:00:12.488 ********* 2025-08-29 14:45:59.419533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.419553 | orchestrator | 2025-08-29 14:45:59.419572 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:45:59.419589 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.137) 0:00:12.626 ********* 2025-08-29 14:45:59.419608 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:59.419627 | orchestrator | 2025-08-29 14:45:59.419645 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:45:59.419664 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.141) 0:00:12.768 ********* 2025-08-29 14:45:59.419684 | orchestrator | changed: [testbed-node-3] => { 2025-08-29 14:45:59.419703 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:45:59.419722 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:45:59.419741 | orchestrator |  "sdb": { 2025-08-29 14:45:59.419760 | orchestrator |  "osd_lvm_uuid": "dda150a8-39d5-5493-abc9-b03fdb7d62e3" 2025-08-29 14:45:59.419779 | orchestrator |  }, 2025-08-29 14:45:59.419798 | orchestrator |  "sdc": { 2025-08-29 14:45:59.419818 | orchestrator |  "osd_lvm_uuid": "c0ce2805-49d2-5cc8-844e-183b484fa1c4" 2025-08-29 14:45:59.419837 | orchestrator |  } 2025-08-29 14:45:59.419856 | orchestrator |  }, 2025-08-29 14:45:59.419875 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:45:59.419894 | orchestrator |  { 2025-08-29 14:45:59.419913 | orchestrator |  "data": "osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3", 2025-08-29 14:45:59.419933 | orchestrator |  "data_vg": "ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3" 2025-08-29 14:45:59.419952 | orchestrator |  }, 2025-08-29 14:45:59.419971 | orchestrator |  { 2025-08-29 14:45:59.419990 | orchestrator |  "data": 
"osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4", 2025-08-29 14:45:59.420009 | orchestrator |  "data_vg": "ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4" 2025-08-29 14:45:59.420027 | orchestrator |  } 2025-08-29 14:45:59.420043 | orchestrator |  ] 2025-08-29 14:45:59.420060 | orchestrator |  } 2025-08-29 14:45:59.420076 | orchestrator | } 2025-08-29 14:45:59.420093 | orchestrator | 2025-08-29 14:45:59.420110 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 14:45:59.420127 | orchestrator | Friday 29 August 2025 14:45:56 +0000 (0:00:00.225) 0:00:12.994 ********* 2025-08-29 14:45:59.420158 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:45:59.420175 | orchestrator | 2025-08-29 14:45:59.420192 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:45:59.420208 | orchestrator | 2025-08-29 14:45:59.420294 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:45:59.420320 | orchestrator | Friday 29 August 2025 14:45:58 +0000 (0:00:02.266) 0:00:15.260 ********* 2025-08-29 14:45:59.420338 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:45:59.420357 | orchestrator | 2025-08-29 14:45:59.420376 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:45:59.420394 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.257) 0:00:15.517 ********* 2025-08-29 14:45:59.420415 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:59.420433 | orchestrator | 2025-08-29 14:45:59.420451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:45:59.420479 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.239) 0:00:15.757 ********* 2025-08-29 14:46:07.487214 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:46:07.487392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:46:07.487409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:46:07.487421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:46:07.487432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:46:07.487443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:46:07.487454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:46:07.487465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:46:07.487476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:46:07.487487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:46:07.487524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:46:07.487535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:46:07.487546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:46:07.487557 | orchestrator | 2025-08-29 14:46:07.487576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:46:07.487588 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.376) 0:00:16.134 ********* 2025-08-29 14:46:07.487601 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:07.487613 | orchestrator | 2025-08-29 
14:46:07.487624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487635 | orchestrator | Friday 29 August 2025 14:45:59 +0000 (0:00:00.211) 0:00:16.346 *********
2025-08-29 14:46:07.487646 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487657 | orchestrator |
2025-08-29 14:46:07.487668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487679 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.195) 0:00:16.541 *********
2025-08-29 14:46:07.487689 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487700 | orchestrator |
2025-08-29 14:46:07.487711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487723 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.185) 0:00:16.726 *********
2025-08-29 14:46:07.487736 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487748 | orchestrator |
2025-08-29 14:46:07.487789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487802 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.207) 0:00:16.934 *********
2025-08-29 14:46:07.487814 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487826 | orchestrator |
2025-08-29 14:46:07.487839 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487851 | orchestrator | Friday 29 August 2025 14:46:00 +0000 (0:00:00.219) 0:00:17.154 *********
2025-08-29 14:46:07.487864 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487876 | orchestrator |
2025-08-29 14:46:07.487888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487901 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.649) 0:00:17.803 *********
2025-08-29 14:46:07.487913 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487926 | orchestrator |
2025-08-29 14:46:07.487938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.487951 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.225) 0:00:18.029 *********
2025-08-29 14:46:07.487963 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.487976 | orchestrator |
2025-08-29 14:46:07.487988 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.488001 | orchestrator | Friday 29 August 2025 14:46:01 +0000 (0:00:00.191) 0:00:18.220 *********
2025-08-29 14:46:07.488014 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399)
2025-08-29 14:46:07.488029 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399)
2025-08-29 14:46:07.488041 | orchestrator |
2025-08-29 14:46:07.488054 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.488066 | orchestrator | Friday 29 August 2025 14:46:02 +0000 (0:00:00.463) 0:00:18.684 *********
2025-08-29 14:46:07.488079 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b)
2025-08-29 14:46:07.488092 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b)
2025-08-29 14:46:07.488104 | orchestrator |
2025-08-29 14:46:07.488115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.488126 | orchestrator | Friday 29 August 2025 14:46:02 +0000 (0:00:00.478) 0:00:19.162 *********
2025-08-29 14:46:07.488136 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95)
2025-08-29 14:46:07.488147 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95)
2025-08-29 14:46:07.488158 | orchestrator |
2025-08-29 14:46:07.488169 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.488179 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.498) 0:00:19.660 *********
2025-08-29 14:46:07.488211 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d)
2025-08-29 14:46:07.488223 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d)
2025-08-29 14:46:07.488259 | orchestrator |
2025-08-29 14:46:07.488271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:07.488282 | orchestrator | Friday 29 August 2025 14:46:03 +0000 (0:00:00.436) 0:00:20.096 *********
2025-08-29 14:46:07.488293 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:46:07.488304 | orchestrator |
2025-08-29 14:46:07.488314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488334 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.359) 0:00:20.456 *********
2025-08-29 14:46:07.488345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:46:07.488356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 14:46:07.488375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 14:46:07.488387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 14:46:07.488397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 14:46:07.488408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 14:46:07.488418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 14:46:07.488429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 14:46:07.488440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 14:46:07.488450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 14:46:07.488461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 14:46:07.488471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 14:46:07.488482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 14:46:07.488493 | orchestrator |
2025-08-29 14:46:07.488503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488514 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.380) 0:00:20.836 *********
2025-08-29 14:46:07.488525 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488536 | orchestrator |
2025-08-29 14:46:07.488546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488557 | orchestrator | Friday 29 August 2025 14:46:04 +0000 (0:00:00.191) 0:00:21.027 *********
2025-08-29 14:46:07.488568 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488579 | orchestrator |
2025-08-29 14:46:07.488589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488600 | orchestrator | Friday 29 August 2025 14:46:05 +0000 (0:00:00.658) 0:00:21.685 *********
2025-08-29 14:46:07.488611 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488621 | orchestrator |
2025-08-29 14:46:07.488632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488643 | orchestrator | Friday 29 August 2025 14:46:05 +0000 (0:00:00.208) 0:00:21.894 *********
2025-08-29 14:46:07.488654 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488664 | orchestrator |
2025-08-29 14:46:07.488675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488686 | orchestrator | Friday 29 August 2025 14:46:05 +0000 (0:00:00.202) 0:00:22.097 *********
2025-08-29 14:46:07.488697 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488708 | orchestrator |
2025-08-29 14:46:07.488718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488729 | orchestrator | Friday 29 August 2025 14:46:05 +0000 (0:00:00.194) 0:00:22.291 *********
2025-08-29 14:46:07.488740 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488750 | orchestrator |
2025-08-29 14:46:07.488761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488772 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:00.210) 0:00:22.502 *********
2025-08-29 14:46:07.488782 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488793 | orchestrator |
2025-08-29 14:46:07.488804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488814 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:00.198) 0:00:22.701 *********
2025-08-29 14:46:07.488825 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488836 | orchestrator |
2025-08-29 14:46:07.488847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488864 | orchestrator | Friday 29 August 2025 14:46:06 +0000 (0:00:00.207) 0:00:22.908 *********
2025-08-29 14:46:07.488875 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-08-29 14:46:07.488887 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-08-29 14:46:07.488898 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-08-29 14:46:07.488909 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-08-29 14:46:07.488920 | orchestrator |
2025-08-29 14:46:07.488930 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:07.488941 | orchestrator | Friday 29 August 2025 14:46:07 +0000 (0:00:00.703) 0:00:23.612 *********
2025-08-29 14:46:07.488952 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:07.488963 | orchestrator |
2025-08-29 14:46:07.488980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:13.731376 | orchestrator | Friday 29 August 2025 14:46:07 +0000 (0:00:00.221) 0:00:23.834 *********
2025-08-29 14:46:13.731485 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731499 | orchestrator |
2025-08-29 14:46:13.731510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:13.731520 | orchestrator | Friday 29 August 2025 14:46:07 +0000 (0:00:00.183) 0:00:24.017 *********
2025-08-29 14:46:13.731529 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731538 | orchestrator |
2025-08-29 14:46:13.731547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:13.731556 | orchestrator | Friday 29 August 2025 14:46:07 +0000 (0:00:00.201) 0:00:24.218 *********
2025-08-29 14:46:13.731565 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731574 | orchestrator |
2025-08-29 14:46:13.731605 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 14:46:13.731615 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.209) 0:00:24.428 *********
2025-08-29 14:46:13.731624 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-08-29 14:46:13.731633 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-08-29 14:46:13.731642 | orchestrator |
2025-08-29 14:46:13.731651 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 14:46:13.731659 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.426) 0:00:24.855 *********
2025-08-29 14:46:13.731668 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731677 | orchestrator |
2025-08-29 14:46:13.731686 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 14:46:13.731695 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.143) 0:00:24.998 *********
2025-08-29 14:46:13.731704 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731713 | orchestrator |
2025-08-29 14:46:13.731722 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 14:46:13.731731 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.153) 0:00:25.152 *********
2025-08-29 14:46:13.731739 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731748 | orchestrator |
2025-08-29 14:46:13.731756 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 14:46:13.731765 | orchestrator | Friday 29 August 2025 14:46:08 +0000 (0:00:00.142) 0:00:25.295 *********
2025-08-29 14:46:13.731774 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:46:13.731785 | orchestrator |
2025-08-29 14:46:13.731793 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 14:46:13.731802 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.173) 0:00:25.468 *********
2025-08-29 14:46:13.731812 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '346e0f34-2e25-5bf0-9181-de3fb405aafc'}})
2025-08-29 14:46:13.731821 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ca3f02ac-b393-504d-bf7e-2b1a4059feca'}})
2025-08-29 14:46:13.731830 | orchestrator |
2025-08-29 14:46:13.731839 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 14:46:13.731873 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.195) 0:00:25.664 *********
2025-08-29 14:46:13.731885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '346e0f34-2e25-5bf0-9181-de3fb405aafc'}})
2025-08-29 14:46:13.731897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ca3f02ac-b393-504d-bf7e-2b1a4059feca'}})
2025-08-29 14:46:13.731907 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731917 | orchestrator |
2025-08-29 14:46:13.731927 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 14:46:13.731937 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.209) 0:00:25.874 *********
2025-08-29 14:46:13.731947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '346e0f34-2e25-5bf0-9181-de3fb405aafc'}})
2025-08-29 14:46:13.731957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ca3f02ac-b393-504d-bf7e-2b1a4059feca'}})
2025-08-29 14:46:13.731967 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.731977 | orchestrator |
2025-08-29 14:46:13.731987 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 14:46:13.731997 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.147) 0:00:26.021 *********
2025-08-29 14:46:13.732007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '346e0f34-2e25-5bf0-9181-de3fb405aafc'}})
2025-08-29 14:46:13.732017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ca3f02ac-b393-504d-bf7e-2b1a4059feca'}})
2025-08-29 14:46:13.732026 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732037 | orchestrator |
2025-08-29 14:46:13.732046 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 14:46:13.732056 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.152) 0:00:26.174 *********
2025-08-29 14:46:13.732066 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:46:13.732076 | orchestrator |
2025-08-29 14:46:13.732086 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 14:46:13.732095 | orchestrator | Friday 29 August 2025 14:46:09 +0000 (0:00:00.131) 0:00:26.306 *********
2025-08-29 14:46:13.732105 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:46:13.732115 | orchestrator |
2025-08-29 14:46:13.732125 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 14:46:13.732134 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.131) 0:00:26.438 *********
2025-08-29 14:46:13.732144 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732154 | orchestrator |
2025-08-29 14:46:13.732181 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29 14:46:13.732191 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.135) 0:00:26.573 *********
2025-08-29 14:46:13.732201 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732210 | orchestrator |
2025-08-29 14:46:13.732220 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 14:46:13.732230 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.331) 0:00:26.904 *********
2025-08-29 14:46:13.732259 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732268 | orchestrator |
2025-08-29 14:46:13.732277 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 14:46:13.732285 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.144) 0:00:27.049 *********
2025-08-29 14:46:13.732294 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:46:13.732303 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:46:13.732311 | orchestrator |  "sdb": {
2025-08-29 14:46:13.732321 | orchestrator |  "osd_lvm_uuid": "346e0f34-2e25-5bf0-9181-de3fb405aafc"
2025-08-29 14:46:13.732330 | orchestrator |  },
2025-08-29 14:46:13.732338 | orchestrator |  "sdc": {
2025-08-29 14:46:13.732347 | orchestrator |  "osd_lvm_uuid": "ca3f02ac-b393-504d-bf7e-2b1a4059feca"
2025-08-29 14:46:13.732362 | orchestrator |  }
2025-08-29 14:46:13.732371 | orchestrator |  }
2025-08-29 14:46:13.732380 | orchestrator | }
2025-08-29 14:46:13.732389 | orchestrator |
2025-08-29 14:46:13.732397 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 14:46:13.732405 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.141) 0:00:27.190 *********
2025-08-29 14:46:13.732414 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732422 | orchestrator |
2025-08-29 14:46:13.732436 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 14:46:13.732445 | orchestrator | Friday 29 August 2025 14:46:10 +0000 (0:00:00.145) 0:00:27.335 *********
2025-08-29 14:46:13.732454 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732462 | orchestrator |
2025-08-29 14:46:13.732471 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 14:46:13.732479 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.153) 0:00:27.489 *********
2025-08-29 14:46:13.732488 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:46:13.732496 | orchestrator |
2025-08-29 14:46:13.732505 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 14:46:13.732513 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.157) 0:00:27.646 *********
2025-08-29 14:46:13.732521 | orchestrator | changed: [testbed-node-4] => {
2025-08-29 14:46:13.732530 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 14:46:13.732539 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:46:13.732547 | orchestrator |  "sdb": {
2025-08-29 14:46:13.732556 | orchestrator |  "osd_lvm_uuid": "346e0f34-2e25-5bf0-9181-de3fb405aafc"
2025-08-29 14:46:13.732565 | orchestrator |  },
2025-08-29 14:46:13.732578 | orchestrator |  "sdc": {
2025-08-29 14:46:13.732586 | orchestrator |  "osd_lvm_uuid": "ca3f02ac-b393-504d-bf7e-2b1a4059feca"
2025-08-29 14:46:13.732595 | orchestrator |  }
2025-08-29 14:46:13.732604 | orchestrator |  },
2025-08-29 14:46:13.732612 | orchestrator |  "lvm_volumes": [
2025-08-29 14:46:13.732621 | orchestrator |  {
2025-08-29 14:46:13.732630 | orchestrator |  "data": "osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc",
2025-08-29 14:46:13.732639 | orchestrator |  "data_vg": "ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc"
2025-08-29 14:46:13.732647 | orchestrator |  },
2025-08-29 14:46:13.732656 | orchestrator |  {
2025-08-29 14:46:13.732664 | orchestrator |  "data": "osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca",
2025-08-29 14:46:13.732673 | orchestrator |  "data_vg": "ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca"
2025-08-29 14:46:13.732681 | orchestrator |  }
2025-08-29 14:46:13.732690 | orchestrator |  ]
2025-08-29 14:46:13.732698 | orchestrator |  }
2025-08-29 14:46:13.732707 | orchestrator | }
2025-08-29 14:46:13.732716 | orchestrator |
2025-08-29 14:46:13.732724 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 14:46:13.732733 | orchestrator | Friday 29 August 2025 14:46:11 +0000 (0:00:00.192) 0:00:27.839 *********
2025-08-29 14:46:13.732741 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 14:46:13.732750 | orchestrator |
2025-08-29 14:46:13.732759 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 14:46:13.732767 | orchestrator |
2025-08-29 14:46:13.732776 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:46:13.732784 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:01.020) 0:00:28.860 *********
2025-08-29 14:46:13.732793 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:46:13.732801 | orchestrator |
2025-08-29 14:46:13.732810 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:46:13.732819 | orchestrator | Friday 29 August 2025 14:46:12 +0000 (0:00:00.424) 0:00:29.284 *********
2025-08-29 14:46:13.732827 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:46:13.732841 | orchestrator |
2025-08-29 14:46:13.732849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:13.732858 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.452) 0:00:29.736 *********
2025-08-29 14:46:13.732867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:46:13.732875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:46:13.732884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:46:13.732892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:46:13.732901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-08-29 14:46:13.732909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-08-29 14:46:13.732923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-08-29 14:46:22.162832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-08-29 14:46:22.162928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-08-29 14:46:22.162942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-08-29 14:46:22.162954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-08-29 14:46:22.162965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-08-29 14:46:22.162976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-08-29 14:46:22.162987 | orchestrator |
2025-08-29 14:46:22.162999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163011 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.335) 0:00:30.072 *********
2025-08-29 14:46:22.163022 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163034 | orchestrator |
2025-08-29 14:46:22.163044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163055 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.204) 0:00:30.277 *********
2025-08-29 14:46:22.163065 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163082 | orchestrator |
2025-08-29 14:46:22.163101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163119 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.193) 0:00:30.470 *********
2025-08-29 14:46:22.163137 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163155 | orchestrator |
2025-08-29 14:46:22.163173 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163190 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.183) 0:00:30.654 *********
2025-08-29 14:46:22.163208 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163227 | orchestrator |
2025-08-29 14:46:22.163320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163343 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.174) 0:00:30.828 *********
2025-08-29 14:46:22.163355 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163365 | orchestrator |
2025-08-29 14:46:22.163376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163387 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.210) 0:00:31.039 *********
2025-08-29 14:46:22.163397 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163408 | orchestrator |
2025-08-29 14:46:22.163418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163429 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.181) 0:00:31.220 *********
2025-08-29 14:46:22.163440 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163450 | orchestrator |
2025-08-29 14:46:22.163486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163497 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.184) 0:00:31.405 *********
2025-08-29 14:46:22.163508 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.163518 | orchestrator |
2025-08-29 14:46:22.163546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163558 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.205) 0:00:31.610 *********
2025-08-29 14:46:22.163568 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661)
2025-08-29 14:46:22.163581 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661)
2025-08-29 14:46:22.163592 | orchestrator |
2025-08-29 14:46:22.163603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163614 | orchestrator | Friday 29 August 2025 14:46:15 +0000 (0:00:00.660) 0:00:32.271 *********
2025-08-29 14:46:22.163624 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9)
2025-08-29 14:46:22.163635 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9)
2025-08-29 14:46:22.163646 | orchestrator |
2025-08-29 14:46:22.163656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163667 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:00.866) 0:00:33.137 *********
2025-08-29 14:46:22.163677 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b)
2025-08-29 14:46:22.163688 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b)
2025-08-29 14:46:22.163699 | orchestrator |
2025-08-29 14:46:22.163710 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163720 | orchestrator | Friday 29 August 2025 14:46:17 +0000 (0:00:00.459) 0:00:33.596 *********
2025-08-29 14:46:22.163731 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598)
2025-08-29 14:46:22.163742 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598)
2025-08-29 14:46:22.163752 | orchestrator |
2025-08-29 14:46:22.163763 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:46:22.163774 | orchestrator | Friday 29 August 2025 14:46:17 +0000 (0:00:00.495) 0:00:34.092 *********
2025-08-29 14:46:22.163784 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:46:22.163795 | orchestrator |
2025-08-29 14:46:22.163805 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.163816 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.389) 0:00:34.482 *********
2025-08-29 14:46:22.163847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:46:22.163858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:46:22.163869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:46:22.163880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:46:22.163890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-08-29 14:46:22.163901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-08-29 14:46:22.163912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-08-29 14:46:22.163922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-08-29 14:46:22.163933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-08-29 14:46:22.163952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-08-29 14:46:22.163963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-08-29 14:46:22.163974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-08-29 14:46:22.163984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-08-29 14:46:22.163995 | orchestrator |
2025-08-29 14:46:22.164006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164017 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.388) 0:00:34.870 *********
2025-08-29 14:46:22.164027 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164038 | orchestrator |
2025-08-29 14:46:22.164049 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164059 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.184) 0:00:35.055 *********
2025-08-29 14:46:22.164070 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164080 | orchestrator |
2025-08-29 14:46:22.164091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164102 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:00.184) 0:00:35.240 *********
2025-08-29 14:46:22.164113 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164123 | orchestrator |
2025-08-29 14:46:22.164134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164145 | orchestrator | Friday 29 August 2025 14:46:19 +0000 (0:00:00.184) 0:00:35.424 *********
2025-08-29 14:46:22.164155 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164166 | orchestrator |
2025-08-29 14:46:22.164177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164188 | orchestrator | Friday 29 August 2025 14:46:19 +0000 (0:00:00.181) 0:00:35.606 *********
2025-08-29 14:46:22.164198 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164209 | orchestrator |
2025-08-29 14:46:22.164220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164230 | orchestrator | Friday 29 August 2025 14:46:19 +0000 (0:00:00.183) 0:00:35.789 *********
2025-08-29 14:46:22.164269 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164290 | orchestrator |
2025-08-29 14:46:22.164309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164326 | orchestrator | Friday 29 August 2025 14:46:20 +0000 (0:00:00.650) 0:00:36.440 *********
2025-08-29 14:46:22.164345 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164360 | orchestrator |
2025-08-29 14:46:22.164370 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164381 | orchestrator | Friday 29 August 2025 14:46:20 +0000 (0:00:00.215) 0:00:36.655 *********
2025-08-29 14:46:22.164391 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164402 | orchestrator |
2025-08-29 14:46:22.164413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164423 | orchestrator | Friday 29 August 2025 14:46:20 +0000 (0:00:00.251) 0:00:36.906 *********
2025-08-29 14:46:22.164434 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-08-29 14:46:22.164445 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-08-29 14:46:22.164456 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-08-29 14:46:22.164466 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-08-29 14:46:22.164477 | orchestrator |
2025-08-29 14:46:22.164487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164498 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.758) 0:00:37.665 *********
2025-08-29 14:46:22.164508 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164519 | orchestrator |
2025-08-29 14:46:22.164530 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164540 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.211) 0:00:37.877 *********
2025-08-29 14:46:22.164558 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164569 | orchestrator |
2025-08-29 14:46:22.164580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164590 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.201) 0:00:38.079 *********
2025-08-29 14:46:22.164601 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164611 | orchestrator |
2025-08-29 14:46:22.164622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:46:22.164632 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.217) 0:00:38.296 *********
2025-08-29 14:46:22.164650 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:22.164661 | orchestrator |
2025-08-29 14:46:22.164672 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 14:46:22.164689 | orchestrator | Friday 29 August 2025 14:46:22 +0000 (0:00:00.207) 0:00:38.504 *********
2025-08-29 14:46:26.400762 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-08-29 14:46:26.400894 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-08-29 14:46:26.400911 | orchestrator |
2025-08-29 14:46:26.400925 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 14:46:26.400936 | orchestrator | Friday 29 August 2025 14:46:22 +0000 (0:00:00.165) 0:00:38.670 *********
2025-08-29 14:46:26.400948 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:26.400960 | orchestrator |
2025-08-29 14:46:26.400971 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 14:46:26.400982 | orchestrator | Friday 29 August 2025 14:46:22 +0000 (0:00:00.126) 0:00:38.797 *********
2025-08-29 14:46:26.400993 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:26.401004 | orchestrator |
2025-08-29 14:46:26.401014 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 14:46:26.401026 | orchestrator | Friday 29 August 2025 14:46:22 +0000 (0:00:00.132) 0:00:38.930 *********
2025-08-29 14:46:26.401036 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:26.401047 | orchestrator |
2025-08-29 14:46:26.401058 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 14:46:26.401069 | orchestrator | Friday 29 August 2025 14:46:22 +0000 (0:00:00.145) 0:00:39.075 *********
2025-08-29 14:46:26.401079 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:46:26.401092 | orchestrator |
2025-08-29 14:46:26.401103 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 14:46:26.401114 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.334) 0:00:39.410 *********
2025-08-29 14:46:26.401126 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}})
2025-08-29 14:46:26.401139 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd9c5dbd3-dfd6-59a8-a565-791b79996791'}})
2025-08-29 14:46:26.401150 | orchestrator |
2025-08-29 14:46:26.401161 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 14:46:26.401172 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.163) 0:00:39.574 *********
2025-08-29 14:46:26.401183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}})
2025-08-29 14:46:26.401196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd9c5dbd3-dfd6-59a8-a565-791b79996791'}})
2025-08-29 14:46:26.401207 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:26.401218 | orchestrator |
2025-08-29 14:46:26.401278 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 14:46:26.401293 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.145) 0:00:39.719 *********
2025-08-29 14:46:26.401305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}})
2025-08-29 14:46:26.401318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd9c5dbd3-dfd6-59a8-a565-791b79996791'}})
2025-08-29 14:46:26.401354 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:46:26.401366 | orchestrator |
2025-08-29 14:46:26.401377 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 14:46:26.401388 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.189) 0:00:39.909 *********
2025-08-29 14:46:26.401398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value':
{'osd_lvm_uuid': 'bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}})  2025-08-29 14:46:26.401409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd9c5dbd3-dfd6-59a8-a565-791b79996791'}})  2025-08-29 14:46:26.401420 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401431 | orchestrator | 2025-08-29 14:46:26.401441 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:46:26.401452 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.148) 0:00:40.057 ********* 2025-08-29 14:46:26.401463 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:26.401474 | orchestrator | 2025-08-29 14:46:26.401484 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:46:26.401495 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.131) 0:00:40.189 ********* 2025-08-29 14:46:26.401506 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:26.401516 | orchestrator | 2025-08-29 14:46:26.401527 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:46:26.401537 | orchestrator | Friday 29 August 2025 14:46:23 +0000 (0:00:00.160) 0:00:40.349 ********* 2025-08-29 14:46:26.401548 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401559 | orchestrator | 2025-08-29 14:46:26.401570 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:46:26.401580 | orchestrator | Friday 29 August 2025 14:46:24 +0000 (0:00:00.128) 0:00:40.478 ********* 2025-08-29 14:46:26.401591 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401602 | orchestrator | 2025-08-29 14:46:26.401612 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:46:26.401623 | orchestrator | Friday 29 August 2025 14:46:24 +0000 (0:00:00.139) 0:00:40.617 ********* 
2025-08-29 14:46:26.401633 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401644 | orchestrator | 2025-08-29 14:46:26.401655 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:46:26.401666 | orchestrator | Friday 29 August 2025 14:46:24 +0000 (0:00:00.135) 0:00:40.753 ********* 2025-08-29 14:46:26.401676 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:46:26.401687 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:46:26.401698 | orchestrator |  "sdb": { 2025-08-29 14:46:26.401710 | orchestrator |  "osd_lvm_uuid": "bbd8d281-36ff-5086-a3ca-2bb41bb9eed5" 2025-08-29 14:46:26.401742 | orchestrator |  }, 2025-08-29 14:46:26.401754 | orchestrator |  "sdc": { 2025-08-29 14:46:26.401765 | orchestrator |  "osd_lvm_uuid": "d9c5dbd3-dfd6-59a8-a565-791b79996791" 2025-08-29 14:46:26.401776 | orchestrator |  } 2025-08-29 14:46:26.401787 | orchestrator |  } 2025-08-29 14:46:26.401798 | orchestrator | } 2025-08-29 14:46:26.401810 | orchestrator | 2025-08-29 14:46:26.401820 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:46:26.401831 | orchestrator | Friday 29 August 2025 14:46:24 +0000 (0:00:00.143) 0:00:40.897 ********* 2025-08-29 14:46:26.401842 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401853 | orchestrator | 2025-08-29 14:46:26.401863 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:46:26.401874 | orchestrator | Friday 29 August 2025 14:46:24 +0000 (0:00:00.148) 0:00:41.045 ********* 2025-08-29 14:46:26.401885 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401895 | orchestrator | 2025-08-29 14:46:26.401906 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:46:26.401926 | orchestrator | Friday 29 August 2025 14:46:25 +0000 (0:00:00.382) 0:00:41.428 ********* 
2025-08-29 14:46:26.401936 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:26.401947 | orchestrator | 2025-08-29 14:46:26.401958 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:46:26.401968 | orchestrator | Friday 29 August 2025 14:46:25 +0000 (0:00:00.150) 0:00:41.578 ********* 2025-08-29 14:46:26.401979 | orchestrator | changed: [testbed-node-5] => { 2025-08-29 14:46:26.401989 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:46:26.402001 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:46:26.402011 | orchestrator |  "sdb": { 2025-08-29 14:46:26.402083 | orchestrator |  "osd_lvm_uuid": "bbd8d281-36ff-5086-a3ca-2bb41bb9eed5" 2025-08-29 14:46:26.402095 | orchestrator |  }, 2025-08-29 14:46:26.402106 | orchestrator |  "sdc": { 2025-08-29 14:46:26.402117 | orchestrator |  "osd_lvm_uuid": "d9c5dbd3-dfd6-59a8-a565-791b79996791" 2025-08-29 14:46:26.402127 | orchestrator |  } 2025-08-29 14:46:26.402138 | orchestrator |  }, 2025-08-29 14:46:26.402148 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:46:26.402159 | orchestrator |  { 2025-08-29 14:46:26.402170 | orchestrator |  "data": "osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5", 2025-08-29 14:46:26.402181 | orchestrator |  "data_vg": "ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5" 2025-08-29 14:46:26.402192 | orchestrator |  }, 2025-08-29 14:46:26.402202 | orchestrator |  { 2025-08-29 14:46:26.402213 | orchestrator |  "data": "osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791", 2025-08-29 14:46:26.402224 | orchestrator |  "data_vg": "ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791" 2025-08-29 14:46:26.402235 | orchestrator |  } 2025-08-29 14:46:26.402267 | orchestrator |  ] 2025-08-29 14:46:26.402278 | orchestrator |  } 2025-08-29 14:46:26.402289 | orchestrator | } 2025-08-29 14:46:26.402305 | orchestrator | 2025-08-29 14:46:26.402316 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 
2025-08-29 14:46:26.402327 | orchestrator | Friday 29 August 2025 14:46:25 +0000 (0:00:00.212) 0:00:41.791 ********* 2025-08-29 14:46:26.402338 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:46:26.402348 | orchestrator | 2025-08-29 14:46:26.402359 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:26.402378 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:46:26.402392 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:46:26.402403 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 14:46:26.402414 | orchestrator | 2025-08-29 14:46:26.402425 | orchestrator | 2025-08-29 14:46:26.402436 | orchestrator | 2025-08-29 14:46:26.402446 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:26.402457 | orchestrator | Friday 29 August 2025 14:46:26 +0000 (0:00:00.943) 0:00:42.734 ********* 2025-08-29 14:46:26.402468 | orchestrator | =============================================================================== 2025-08-29 14:46:26.402478 | orchestrator | Write configuration file ------------------------------------------------ 4.23s 2025-08-29 14:46:26.402489 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2025-08-29 14:46:26.402500 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2025-08-29 14:46:26.402510 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-08-29 14:46:26.402521 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.91s 2025-08-29 14:46:26.402531 | orchestrator | Get initial list of available block devices 
----------------------------- 0.91s 2025-08-29 14:46:26.402551 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2025-08-29 14:46:26.402562 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s 2025-08-29 14:46:26.402572 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-08-29 14:46:26.402583 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-08-29 14:46:26.402593 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-08-29 14:46:26.402604 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.68s 2025-08-29 14:46:26.402615 | orchestrator | Print DB devices -------------------------------------------------------- 0.67s 2025-08-29 14:46:26.402625 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-08-29 14:46:26.402644 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-08-29 14:46:26.796090 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-08-29 14:46:26.796215 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-08-29 14:46:26.796228 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.64s 2025-08-29 14:46:26.796239 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-08-29 14:46:26.796284 | orchestrator | Print configuration data ------------------------------------------------ 0.63s 2025-08-29 14:46:49.909330 | orchestrator | 2025-08-29 14:46:49 | INFO  | Task 607df0fc-8e31-427f-99f5-abbe9a3b67ce (sync inventory) is running in background. Output coming soon. 
2025-08-29 14:47:07.719456 | orchestrator | 2025-08-29 14:46:51 | INFO  | Starting group_vars file reorganization 2025-08-29 14:47:07.719562 | orchestrator | 2025-08-29 14:46:51 | INFO  | Moved 0 file(s) to their respective directories 2025-08-29 14:47:07.719573 | orchestrator | 2025-08-29 14:46:51 | INFO  | Group_vars file reorganization completed 2025-08-29 14:47:07.719580 | orchestrator | 2025-08-29 14:46:53 | INFO  | Starting variable preparation from inventory 2025-08-29 14:47:07.719588 | orchestrator | 2025-08-29 14:46:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-08-29 14:47:07.719595 | orchestrator | 2025-08-29 14:46:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-08-29 14:47:07.719601 | orchestrator | 2025-08-29 14:46:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-08-29 14:47:07.719608 | orchestrator | 2025-08-29 14:46:54 | INFO  | 3 file(s) written, 6 host(s) processed 2025-08-29 14:47:07.719614 | orchestrator | 2025-08-29 14:46:54 | INFO  | Variable preparation completed 2025-08-29 14:47:07.719621 | orchestrator | 2025-08-29 14:46:55 | INFO  | Starting inventory overwrite handling 2025-08-29 14:47:07.719627 | orchestrator | 2025-08-29 14:46:55 | INFO  | Handling group overwrites in 99-overwrite 2025-08-29 14:47:07.719634 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group frr:children from 60-generic 2025-08-29 14:47:07.719641 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group storage:children from 50-kolla 2025-08-29 14:47:07.719647 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group netbird:children from 50-infrastruture 2025-08-29 14:47:07.719653 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group ceph-rgw from 50-ceph 2025-08-29 14:47:07.719660 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group ceph-mds from 50-ceph 2025-08-29 14:47:07.719667 | orchestrator | 2025-08-29 14:46:55 | INFO  | Handling group 
overwrites in 20-roles 2025-08-29 14:47:07.719673 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removing group k3s_node from 50-infrastruture 2025-08-29 14:47:07.719708 | orchestrator | 2025-08-29 14:46:55 | INFO  | Removed 6 group(s) in total 2025-08-29 14:47:07.719715 | orchestrator | 2025-08-29 14:46:55 | INFO  | Inventory overwrite handling completed 2025-08-29 14:47:07.719721 | orchestrator | 2025-08-29 14:46:56 | INFO  | Starting merge of inventory files 2025-08-29 14:47:07.719728 | orchestrator | 2025-08-29 14:46:56 | INFO  | Inventory files merged successfully 2025-08-29 14:47:07.719734 | orchestrator | 2025-08-29 14:47:00 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-08-29 14:47:07.719740 | orchestrator | 2025-08-29 14:47:06 | INFO  | Successfully wrote ClusterShell configuration 2025-08-29 14:47:07.719747 | orchestrator | [master 78ac793] 2025-08-29-14-47 2025-08-29 14:47:07.719755 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-08-29 14:47:09.541035 | orchestrator | 2025-08-29 14:47:09 | INFO  | Task c3a92636-8f27-48d2-abc9-e804c919ddbc (ceph-create-lvm-devices) was prepared for execution. 2025-08-29 14:47:09.541130 | orchestrator | 2025-08-29 14:47:09 | INFO  | It takes a moment until task c3a92636-8f27-48d2-abc9-e804c919ddbc (ceph-create-lvm-devices) has been started and output is visible here. 
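Before the next play starts, a note on the `lvm_volumes` structure printed by the previous play (task "Generate lvm_volumes structure (block only)" and the "Print configuration data" output above): each entry in `ceph_osd_devices` yields one LV/VG name pair derived from that device's `osd_lvm_uuid`. The sketch below reproduces that transformation; `generate_lvm_volumes` is a hypothetical helper name, not the actual OSISM task code, and the UUIDs are the ones logged for testbed-node-5 above.

```python
# Hypothetical sketch (not the actual OSISM task code) of the mapping the
# "Generate lvm_volumes structure (block only)" task performs: one
# data/data_vg name pair per entry in ceph_osd_devices, derived from the
# per-device osd_lvm_uuid.
def generate_lvm_volumes(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
        }
        for dev in ceph_osd_devices.values()
    ]

# Values logged for testbed-node-5 in the play above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "bbd8d281-36ff-5086-a3ca-2bb41bb9eed5"},
    "sdc": {"osd_lvm_uuid": "d9c5dbd3-dfd6-59a8-a565-791b79996791"},
}
lvm_volumes = generate_lvm_volumes(ceph_osd_devices)
print(lvm_volumes)
```

This matches the `lvm_volumes` list shown in the "Print configuration data" task output for testbed-node-5.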
2025-08-29 14:47:20.180920 | orchestrator | 2025-08-29 14:47:20.180976 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:47:20.180982 | orchestrator | 2025-08-29 14:47:20.180987 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:47:20.180992 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.291) 0:00:00.291 ********* 2025-08-29 14:47:20.180996 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:47:20.181001 | orchestrator | 2025-08-29 14:47:20.181005 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:47:20.181009 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.220) 0:00:00.512 ********* 2025-08-29 14:47:20.181014 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:20.181019 | orchestrator | 2025-08-29 14:47:20.181023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181027 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.203) 0:00:00.716 ********* 2025-08-29 14:47:20.181031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:47:20.181036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:47:20.181041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:47:20.181045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:47:20.181049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:47:20.181053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:47:20.181058 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:47:20.181062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:47:20.181066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 14:47:20.181070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:47:20.181075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:47:20.181079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:47:20.181083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:47:20.181087 | orchestrator | 2025-08-29 14:47:20.181091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181107 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.370) 0:00:01.086 ********* 2025-08-29 14:47:20.181112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181116 | orchestrator | 2025-08-29 14:47:20.181121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181133 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.375) 0:00:01.461 ********* 2025-08-29 14:47:20.181138 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181142 | orchestrator | 2025-08-29 14:47:20.181146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181150 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.189) 0:00:01.651 ********* 2025-08-29 14:47:20.181154 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181159 | orchestrator | 2025-08-29 14:47:20.181165 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 14:47:20.181169 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.208) 0:00:01.859 ********* 2025-08-29 14:47:20.181173 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181178 | orchestrator | 2025-08-29 14:47:20.181182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181186 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.171) 0:00:02.031 ********* 2025-08-29 14:47:20.181190 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181194 | orchestrator | 2025-08-29 14:47:20.181198 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181203 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.184) 0:00:02.215 ********* 2025-08-29 14:47:20.181207 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181211 | orchestrator | 2025-08-29 14:47:20.181215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181219 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.172) 0:00:02.388 ********* 2025-08-29 14:47:20.181224 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181228 | orchestrator | 2025-08-29 14:47:20.181232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181236 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.192) 0:00:02.581 ********* 2025-08-29 14:47:20.181241 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181245 | orchestrator | 2025-08-29 14:47:20.181249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181254 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.183) 0:00:02.764 ********* 2025-08-29 14:47:20.181258 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78) 2025-08-29 14:47:20.181263 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78) 2025-08-29 14:47:20.181268 | orchestrator | 2025-08-29 14:47:20.181286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181291 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.396) 0:00:03.161 ********* 2025-08-29 14:47:20.181303 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714) 2025-08-29 14:47:20.181308 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714) 2025-08-29 14:47:20.181312 | orchestrator | 2025-08-29 14:47:20.181316 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181321 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.402) 0:00:03.563 ********* 2025-08-29 14:47:20.181325 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c) 2025-08-29 14:47:20.181329 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c) 2025-08-29 14:47:20.181334 | orchestrator | 2025-08-29 14:47:20.181338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181347 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.537) 0:00:04.101 ********* 2025-08-29 14:47:20.181351 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34) 2025-08-29 14:47:20.181356 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34) 2025-08-29 14:47:20.181360 | orchestrator | 2025-08-29 14:47:20.181364 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:20.181369 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.564) 0:00:04.666 ********* 2025-08-29 14:47:20.181373 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:47:20.181377 | orchestrator | 2025-08-29 14:47:20.181382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181386 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.588) 0:00:05.255 ********* 2025-08-29 14:47:20.181390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:47:20.181394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:47:20.181399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:47:20.181403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:47:20.181407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:47:20.181411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:47:20.181416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:47:20.181420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:47:20.181424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:47:20.181428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:47:20.181432 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:47:20.181437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:47:20.181441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:47:20.181445 | orchestrator | 2025-08-29 14:47:20.181449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181454 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.436) 0:00:05.691 ********* 2025-08-29 14:47:20.181458 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181462 | orchestrator | 2025-08-29 14:47:20.181467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181471 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.179) 0:00:05.870 ********* 2025-08-29 14:47:20.181475 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181479 | orchestrator | 2025-08-29 14:47:20.181484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181488 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.185) 0:00:06.055 ********* 2025-08-29 14:47:20.181492 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181496 | orchestrator | 2025-08-29 14:47:20.181501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181505 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.194) 0:00:06.249 ********* 2025-08-29 14:47:20.181510 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181515 | orchestrator | 2025-08-29 14:47:20.181520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181524 | orchestrator | Friday 29 August 2025 
14:47:19 +0000 (0:00:00.190) 0:00:06.440 ********* 2025-08-29 14:47:20.181532 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181537 | orchestrator | 2025-08-29 14:47:20.181541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181547 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.165) 0:00:06.605 ********* 2025-08-29 14:47:20.181554 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181561 | orchestrator | 2025-08-29 14:47:20.181569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181577 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.186) 0:00:06.792 ********* 2025-08-29 14:47:20.181582 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:20.181587 | orchestrator | 2025-08-29 14:47:20.181591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:20.181596 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.183) 0:00:06.975 ********* 2025-08-29 14:47:20.181604 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.801955 | orchestrator | 2025-08-29 14:47:28.802113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:28.802131 | orchestrator | Friday 29 August 2025 14:47:20 +0000 (0:00:00.205) 0:00:07.181 ********* 2025-08-29 14:47:28.802142 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:47:28.802154 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:47:28.802164 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:47:28.802174 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:47:28.802199 | orchestrator | 2025-08-29 14:47:28.802209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:28.802219 | 
orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:01.296) 0:00:08.477 ********* 2025-08-29 14:47:28.802230 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802240 | orchestrator | 2025-08-29 14:47:28.802249 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:28.802259 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:00.233) 0:00:08.711 ********* 2025-08-29 14:47:28.802269 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802320 | orchestrator | 2025-08-29 14:47:28.802331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:28.802341 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:00.189) 0:00:08.900 ********* 2025-08-29 14:47:28.802351 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802361 | orchestrator | 2025-08-29 14:47:28.802371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:28.802381 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.225) 0:00:09.125 ********* 2025-08-29 14:47:28.802391 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802401 | orchestrator | 2025-08-29 14:47:28.802411 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:47:28.802421 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.260) 0:00:09.386 ********* 2025-08-29 14:47:28.802431 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802440 | orchestrator | 2025-08-29 14:47:28.802450 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:47:28.802460 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.159) 0:00:09.546 ********* 2025-08-29 14:47:28.802473 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'dda150a8-39d5-5493-abc9-b03fdb7d62e3'}}) 2025-08-29 14:47:28.802484 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c0ce2805-49d2-5cc8-844e-183b484fa1c4'}}) 2025-08-29 14:47:28.802495 | orchestrator | 2025-08-29 14:47:28.802506 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:47:28.802516 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.190) 0:00:09.737 ********* 2025-08-29 14:47:28.802529 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'}) 2025-08-29 14:47:28.802562 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'}) 2025-08-29 14:47:28.802574 | orchestrator | 2025-08-29 14:47:28.802601 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:47:28.802612 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:01.966) 0:00:11.703 ********* 2025-08-29 14:47:28.802629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.802641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.802652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802663 | orchestrator | 2025-08-29 14:47:28.802674 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:47:28.802685 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:00.170) 0:00:11.874 ********* 2025-08-29 14:47:28.802696 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'}) 2025-08-29 14:47:28.802706 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'}) 2025-08-29 14:47:28.802717 | orchestrator | 2025-08-29 14:47:28.802728 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:47:28.802739 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:01.406) 0:00:13.281 ********* 2025-08-29 14:47:28.802750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.802761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.802772 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802782 | orchestrator | 2025-08-29 14:47:28.802792 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:47:28.802802 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.155) 0:00:13.436 ********* 2025-08-29 14:47:28.802811 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802821 | orchestrator | 2025-08-29 14:47:28.802831 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:47:28.802858 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.184) 0:00:13.620 ********* 2025-08-29 14:47:28.802869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.802879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.802888 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802898 | orchestrator | 2025-08-29 14:47:28.802908 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:47:28.802917 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.424) 0:00:14.044 ********* 2025-08-29 14:47:28.802927 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.802937 | orchestrator | 2025-08-29 14:47:28.802946 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:47:28.802956 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.172) 0:00:14.217 ********* 2025-08-29 14:47:28.802966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.802997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.803007 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803017 | orchestrator | 2025-08-29 14:47:28.803027 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:47:28.803047 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.173) 0:00:14.391 ********* 2025-08-29 14:47:28.803058 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803067 | orchestrator | 2025-08-29 14:47:28.803077 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:47:28.803087 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.208) 0:00:14.599 ********* 2025-08-29 14:47:28.803096 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.803106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.803116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803126 | orchestrator | 2025-08-29 14:47:28.803135 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:47:28.803145 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.172) 0:00:14.772 ********* 2025-08-29 14:47:28.803155 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:28.803165 | orchestrator | 2025-08-29 14:47:28.803175 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:47:28.803184 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.153) 0:00:14.925 ********* 2025-08-29 14:47:28.803194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.803209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.803219 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803228 | orchestrator | 2025-08-29 14:47:28.803238 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:47:28.803248 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.179) 0:00:15.104 ********* 2025-08-29 14:47:28.803258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  
2025-08-29 14:47:28.803268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.803293 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803303 | orchestrator | 2025-08-29 14:47:28.803313 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:47:28.803322 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.163) 0:00:15.268 ********* 2025-08-29 14:47:28.803332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:28.803342 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:28.803352 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803362 | orchestrator | 2025-08-29 14:47:28.803372 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:47:28.803381 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.182) 0:00:15.451 ********* 2025-08-29 14:47:28.803391 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803401 | orchestrator | 2025-08-29 14:47:28.803410 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:47:28.803427 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.179) 0:00:15.631 ********* 2025-08-29 14:47:28.803437 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:28.803447 | orchestrator | 2025-08-29 14:47:28.803462 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:47:35.814249 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.170) 
0:00:15.801 ********* 2025-08-29 14:47:35.814361 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814373 | orchestrator | 2025-08-29 14:47:35.814380 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:47:35.814388 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.141) 0:00:15.942 ********* 2025-08-29 14:47:35.814395 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:47:35.814402 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:47:35.814409 | orchestrator | } 2025-08-29 14:47:35.814416 | orchestrator | 2025-08-29 14:47:35.814422 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:47:35.814429 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.383) 0:00:16.326 ********* 2025-08-29 14:47:35.814435 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:47:35.814442 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:47:35.814449 | orchestrator | } 2025-08-29 14:47:35.814455 | orchestrator | 2025-08-29 14:47:35.814462 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:47:35.814469 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.170) 0:00:16.496 ********* 2025-08-29 14:47:35.814475 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:47:35.814482 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:47:35.814489 | orchestrator | } 2025-08-29 14:47:35.814495 | orchestrator | 2025-08-29 14:47:35.814503 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:47:35.814509 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.139) 0:00:16.635 ********* 2025-08-29 14:47:35.814516 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:35.814522 | orchestrator | 2025-08-29 14:47:35.814529 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-08-29 14:47:35.814538 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.735) 0:00:17.371 ********* 2025-08-29 14:47:35.814548 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:35.814560 | orchestrator | 2025-08-29 14:47:35.814571 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:47:35.814582 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.550) 0:00:17.922 ********* 2025-08-29 14:47:35.814594 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:35.814606 | orchestrator | 2025-08-29 14:47:35.814617 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:47:35.814629 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.520) 0:00:18.443 ********* 2025-08-29 14:47:35.814641 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:35.814651 | orchestrator | 2025-08-29 14:47:35.814658 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:47:35.814664 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.155) 0:00:18.599 ********* 2025-08-29 14:47:35.814671 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814678 | orchestrator | 2025-08-29 14:47:35.814684 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:47:35.814691 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.128) 0:00:18.727 ********* 2025-08-29 14:47:35.814697 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814704 | orchestrator | 2025-08-29 14:47:35.814710 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:47:35.814717 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.134) 0:00:18.862 ********* 2025-08-29 14:47:35.814724 | orchestrator | ok: 
[testbed-node-3] => { 2025-08-29 14:47:35.814747 | orchestrator |  "vgs_report": { 2025-08-29 14:47:35.814755 | orchestrator |  "vg": [] 2025-08-29 14:47:35.814761 | orchestrator |  } 2025-08-29 14:47:35.814768 | orchestrator | } 2025-08-29 14:47:35.814775 | orchestrator | 2025-08-29 14:47:35.814782 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:47:35.814788 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.150) 0:00:19.012 ********* 2025-08-29 14:47:35.814795 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814801 | orchestrator | 2025-08-29 14:47:35.814808 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:47:35.814815 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.167) 0:00:19.180 ********* 2025-08-29 14:47:35.814821 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814828 | orchestrator | 2025-08-29 14:47:35.814836 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:47:35.814843 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.205) 0:00:19.385 ********* 2025-08-29 14:47:35.814851 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814858 | orchestrator | 2025-08-29 14:47:35.814865 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:47:35.814872 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.371) 0:00:19.757 ********* 2025-08-29 14:47:35.814880 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814887 | orchestrator | 2025-08-29 14:47:35.814894 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:47:35.814902 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.192) 0:00:19.950 ********* 2025-08-29 14:47:35.814909 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 14:47:35.814917 | orchestrator | 2025-08-29 14:47:35.814935 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:47:35.814943 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.195) 0:00:20.145 ********* 2025-08-29 14:47:35.814950 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814958 | orchestrator | 2025-08-29 14:47:35.814965 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:47:35.814972 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.136) 0:00:20.281 ********* 2025-08-29 14:47:35.814980 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.814987 | orchestrator | 2025-08-29 14:47:35.814994 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:47:35.815002 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.140) 0:00:20.422 ********* 2025-08-29 14:47:35.815009 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815017 | orchestrator | 2025-08-29 14:47:35.815024 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:47:35.815043 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.257) 0:00:20.679 ********* 2025-08-29 14:47:35.815051 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815058 | orchestrator | 2025-08-29 14:47:35.815065 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:47:35.815073 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.138) 0:00:20.817 ********* 2025-08-29 14:47:35.815080 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815088 | orchestrator | 2025-08-29 14:47:35.815095 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:47:35.815102 | 
orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.167) 0:00:20.985 ********* 2025-08-29 14:47:35.815109 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815117 | orchestrator | 2025-08-29 14:47:35.815124 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:47:35.815131 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.150) 0:00:21.136 ********* 2025-08-29 14:47:35.815138 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815145 | orchestrator | 2025-08-29 14:47:35.815153 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:47:35.815164 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.146) 0:00:21.282 ********* 2025-08-29 14:47:35.815172 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815179 | orchestrator | 2025-08-29 14:47:35.815186 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:47:35.815194 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.147) 0:00:21.429 ********* 2025-08-29 14:47:35.815202 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815209 | orchestrator | 2025-08-29 14:47:35.815217 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:47:35.815224 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.138) 0:00:21.567 ********* 2025-08-29 14:47:35.815231 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:35.815239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:35.815246 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
14:47:35.815252 | orchestrator | 2025-08-29 14:47:35.815259 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:47:35.815265 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.173) 0:00:21.741 ********* 2025-08-29 14:47:35.815272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:35.815295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:35.815303 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815309 | orchestrator | 2025-08-29 14:47:35.815316 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:47:35.815322 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:00.377) 0:00:22.118 ********* 2025-08-29 14:47:35.815332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:35.815339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:35.815346 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815352 | orchestrator | 2025-08-29 14:47:35.815359 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:47:35.815365 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:00.163) 0:00:22.282 ********* 2025-08-29 14:47:35.815372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 
14:47:35.815379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:35.815385 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815392 | orchestrator | 2025-08-29 14:47:35.815398 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:47:35.815405 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:00.167) 0:00:22.449 ********* 2025-08-29 14:47:35.815411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:35.815418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:35.815424 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:35.815431 | orchestrator | 2025-08-29 14:47:35.815437 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:47:35.815448 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:00.175) 0:00:22.625 ********* 2025-08-29 14:47:35.815455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:35.815466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.456884 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.456980 | orchestrator | 2025-08-29 14:47:41.457004 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:47:41.457024 | orchestrator | Friday 29 August 2025 
14:47:35 +0000 (0:00:00.189) 0:00:22.814 ********* 2025-08-29 14:47:41.457043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:41.457060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.457079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.457099 | orchestrator | 2025-08-29 14:47:41.457118 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:47:41.457137 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:00.180) 0:00:22.995 ********* 2025-08-29 14:47:41.457155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:41.457167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.457178 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.457188 | orchestrator | 2025-08-29 14:47:41.457199 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:47:41.457210 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:00.167) 0:00:23.162 ********* 2025-08-29 14:47:41.457220 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:41.457232 | orchestrator | 2025-08-29 14:47:41.457242 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:47:41.457253 | orchestrator | Friday 29 August 2025 14:47:36 +0000 (0:00:00.531) 0:00:23.693 ********* 2025-08-29 14:47:41.457263 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:41.457274 | 
orchestrator | 2025-08-29 14:47:41.457324 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:47:41.457336 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:00.577) 0:00:24.270 ********* 2025-08-29 14:47:41.457347 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:41.457357 | orchestrator | 2025-08-29 14:47:41.457368 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:47:41.457379 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:00.158) 0:00:24.429 ********* 2025-08-29 14:47:41.457389 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'vg_name': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'}) 2025-08-29 14:47:41.457401 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'vg_name': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'}) 2025-08-29 14:47:41.457412 | orchestrator | 2025-08-29 14:47:41.457423 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:47:41.457434 | orchestrator | Friday 29 August 2025 14:47:37 +0000 (0:00:00.166) 0:00:24.596 ********* 2025-08-29 14:47:41.457447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:41.457459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.457496 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.457508 | orchestrator | 2025-08-29 14:47:41.457520 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:47:41.457532 | orchestrator | Friday 29 August 2025 14:47:37 +0000 
(0:00:00.150) 0:00:24.746 ********* 2025-08-29 14:47:41.457545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:41.457557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.457569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.457581 | orchestrator | 2025-08-29 14:47:41.457593 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:47:41.457605 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.383) 0:00:25.130 ********* 2025-08-29 14:47:41.457617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'})  2025-08-29 14:47:41.457630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'})  2025-08-29 14:47:41.457642 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:41.457655 | orchestrator | 2025-08-29 14:47:41.457667 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:47:41.457679 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.173) 0:00:25.304 ********* 2025-08-29 14:47:41.457691 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 14:47:41.457704 | orchestrator |  "lvm_report": { 2025-08-29 14:47:41.457717 | orchestrator |  "lv": [ 2025-08-29 14:47:41.457729 | orchestrator |  { 2025-08-29 14:47:41.457758 | orchestrator |  "lv_name": "osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4", 2025-08-29 14:47:41.457771 | orchestrator |  "vg_name": "ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4" 2025-08-29 
14:47:41.457783 | orchestrator |  }, 2025-08-29 14:47:41.457796 | orchestrator |  { 2025-08-29 14:47:41.457806 | orchestrator |  "lv_name": "osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3", 2025-08-29 14:47:41.457817 | orchestrator |  "vg_name": "ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3" 2025-08-29 14:47:41.457828 | orchestrator |  } 2025-08-29 14:47:41.457838 | orchestrator |  ], 2025-08-29 14:47:41.457849 | orchestrator |  "pv": [ 2025-08-29 14:47:41.457860 | orchestrator |  { 2025-08-29 14:47:41.457870 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:47:41.457881 | orchestrator |  "vg_name": "ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3" 2025-08-29 14:47:41.457892 | orchestrator |  }, 2025-08-29 14:47:41.457903 | orchestrator |  { 2025-08-29 14:47:41.457913 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:47:41.457924 | orchestrator |  "vg_name": "ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4" 2025-08-29 14:47:41.457935 | orchestrator |  } 2025-08-29 14:47:41.457945 | orchestrator |  ] 2025-08-29 14:47:41.457956 | orchestrator |  } 2025-08-29 14:47:41.457967 | orchestrator | } 2025-08-29 14:47:41.457978 | orchestrator | 2025-08-29 14:47:41.457989 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:47:41.457999 | orchestrator | 2025-08-29 14:47:41.458010 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:47:41.458077 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.290) 0:00:25.594 ********* 2025-08-29 14:47:41.458088 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:47:41.458099 | orchestrator | 2025-08-29 14:47:41.458118 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:47:41.458129 | orchestrator | Friday 29 August 2025 14:47:38 +0000 (0:00:00.290) 0:00:25.885 ********* 2025-08-29 14:47:41.458140 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 14:47:41.458150 | orchestrator | 2025-08-29 14:47:41.458161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458172 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:00.269) 0:00:26.154 ********* 2025-08-29 14:47:41.458197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:47:41.458208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:47:41.458219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:47:41.458230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:47:41.458241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:47:41.458251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:47:41.458262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:47:41.458273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:47:41.458323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:47:41.458336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:47:41.458394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:47:41.458406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:47:41.458417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:47:41.458428 | orchestrator | 2025-08-29 
14:47:41.458439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458449 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:00.438) 0:00:26.593 ********* 2025-08-29 14:47:41.458460 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458471 | orchestrator | 2025-08-29 14:47:41.458482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458493 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:00.188) 0:00:26.782 ********* 2025-08-29 14:47:41.458503 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458514 | orchestrator | 2025-08-29 14:47:41.458525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458535 | orchestrator | Friday 29 August 2025 14:47:39 +0000 (0:00:00.199) 0:00:26.981 ********* 2025-08-29 14:47:41.458546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458556 | orchestrator | 2025-08-29 14:47:41.458567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458578 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.224) 0:00:27.205 ********* 2025-08-29 14:47:41.458589 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458599 | orchestrator | 2025-08-29 14:47:41.458610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458621 | orchestrator | Friday 29 August 2025 14:47:40 +0000 (0:00:00.655) 0:00:27.861 ********* 2025-08-29 14:47:41.458632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458642 | orchestrator | 2025-08-29 14:47:41.458653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458663 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.188) 
0:00:28.049 ********* 2025-08-29 14:47:41.458674 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458685 | orchestrator | 2025-08-29 14:47:41.458695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:41.458714 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.195) 0:00:28.245 ********* 2025-08-29 14:47:41.458724 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:41.458735 | orchestrator | 2025-08-29 14:47:41.458756 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.065837 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.212) 0:00:28.457 ********* 2025-08-29 14:47:53.065929 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.065945 | orchestrator | 2025-08-29 14:47:53.065957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.065968 | orchestrator | Friday 29 August 2025 14:47:41 +0000 (0:00:00.210) 0:00:28.668 ********* 2025-08-29 14:47:53.065979 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399) 2025-08-29 14:47:53.065991 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399) 2025-08-29 14:47:53.066002 | orchestrator | 2025-08-29 14:47:53.066012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.066077 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:00.449) 0:00:29.117 ********* 2025-08-29 14:47:53.066089 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b) 2025-08-29 14:47:53.066100 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b) 2025-08-29 14:47:53.066112 | orchestrator | 2025-08-29 14:47:53.066122 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.066133 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:00.416) 0:00:29.533 ********* 2025-08-29 14:47:53.066144 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95) 2025-08-29 14:47:53.066155 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95) 2025-08-29 14:47:53.066166 | orchestrator | 2025-08-29 14:47:53.066177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.066188 | orchestrator | Friday 29 August 2025 14:47:42 +0000 (0:00:00.463) 0:00:29.996 ********* 2025-08-29 14:47:53.066199 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d) 2025-08-29 14:47:53.066210 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d) 2025-08-29 14:47:53.066221 | orchestrator | 2025-08-29 14:47:53.066232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:53.066243 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.421) 0:00:30.418 ********* 2025-08-29 14:47:53.066254 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:47:53.066265 | orchestrator | 2025-08-29 14:47:53.066276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066328 | orchestrator | Friday 29 August 2025 14:47:43 +0000 (0:00:00.329) 0:00:30.747 ********* 2025-08-29 14:47:53.066342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:47:53.066368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 
14:47:53.066381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:47:53.066393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:47:53.066406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:47:53.066418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:47:53.066431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:47:53.066462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:47:53.066475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 14:47:53.066487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:47:53.066499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:47:53.066512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:47:53.066524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:47:53.066537 | orchestrator | 2025-08-29 14:47:53.066550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066563 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.657) 0:00:31.405 ********* 2025-08-29 14:47:53.066575 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066588 | orchestrator | 2025-08-29 14:47:53.066598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066609 | orchestrator | Friday 29 
August 2025 14:47:44 +0000 (0:00:00.293) 0:00:31.698 ********* 2025-08-29 14:47:53.066620 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066630 | orchestrator | 2025-08-29 14:47:53.066641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066652 | orchestrator | Friday 29 August 2025 14:47:44 +0000 (0:00:00.255) 0:00:31.954 ********* 2025-08-29 14:47:53.066663 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066673 | orchestrator | 2025-08-29 14:47:53.066684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066695 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.217) 0:00:32.172 ********* 2025-08-29 14:47:53.066705 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066716 | orchestrator | 2025-08-29 14:47:53.066744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066755 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.191) 0:00:32.363 ********* 2025-08-29 14:47:53.066766 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066777 | orchestrator | 2025-08-29 14:47:53.066787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066798 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.198) 0:00:32.562 ********* 2025-08-29 14:47:53.066809 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066819 | orchestrator | 2025-08-29 14:47:53.066830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066841 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.215) 0:00:32.777 ********* 2025-08-29 14:47:53.066852 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066862 | orchestrator | 2025-08-29 14:47:53.066873 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066884 | orchestrator | Friday 29 August 2025 14:47:45 +0000 (0:00:00.224) 0:00:33.001 ********* 2025-08-29 14:47:53.066894 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.066905 | orchestrator | 2025-08-29 14:47:53.066916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.066927 | orchestrator | Friday 29 August 2025 14:47:46 +0000 (0:00:00.219) 0:00:33.221 ********* 2025-08-29 14:47:53.066937 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 14:47:53.066948 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 14:47:53.066959 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 14:47:53.066970 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 14:47:53.066981 | orchestrator | 2025-08-29 14:47:53.066992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.067003 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:00.870) 0:00:34.091 ********* 2025-08-29 14:47:53.067020 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067031 | orchestrator | 2025-08-29 14:47:53.067042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.067053 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:00.218) 0:00:34.309 ********* 2025-08-29 14:47:53.067064 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067074 | orchestrator | 2025-08-29 14:47:53.067085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.067096 | orchestrator | Friday 29 August 2025 14:47:47 +0000 (0:00:00.199) 0:00:34.509 ********* 2025-08-29 14:47:53.067107 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067117 | 
orchestrator | 2025-08-29 14:47:53.067128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:53.067139 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:00.682) 0:00:35.191 ********* 2025-08-29 14:47:53.067150 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067161 | orchestrator | 2025-08-29 14:47:53.067172 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:47:53.067182 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:00.208) 0:00:35.400 ********* 2025-08-29 14:47:53.067193 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067204 | orchestrator | 2025-08-29 14:47:53.067215 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:47:53.067226 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:00.168) 0:00:35.568 ********* 2025-08-29 14:47:53.067237 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '346e0f34-2e25-5bf0-9181-de3fb405aafc'}}) 2025-08-29 14:47:53.067248 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ca3f02ac-b393-504d-bf7e-2b1a4059feca'}}) 2025-08-29 14:47:53.067259 | orchestrator | 2025-08-29 14:47:53.067270 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:47:53.067280 | orchestrator | Friday 29 August 2025 14:47:48 +0000 (0:00:00.237) 0:00:35.805 ********* 2025-08-29 14:47:53.067327 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'}) 2025-08-29 14:47:53.067340 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'}) 2025-08-29 14:47:53.067351 | 
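The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above derive deterministic LVM names from each device's `osd_lvm_uuid`: the VG is named `ceph-<uuid>` and the LV `osd-block-<uuid>`, as visible in the `changed:` items. A minimal Python sketch of that naming scheme (the `lvm_layout` helper and its return shape are illustrative, not taken from the playbook):

```python
# Sketch of the deterministic Ceph OSD LVM naming seen in this log:
# VG = "ceph-<osd_lvm_uuid>", LV = "osd-block-<osd_lvm_uuid>".
# ceph_osd_devices maps a device name (e.g. "sdb") to its osd_lvm_uuid.

def lvm_layout(ceph_osd_devices: dict) -> dict:
    """Return {pv_path: {"vg": ..., "lv": ...}} for each OSD device."""
    layout = {}
    for dev, meta in ceph_osd_devices.items():
        uuid = meta["osd_lvm_uuid"]
        layout[f"/dev/{dev}"] = {
            "vg": f"ceph-{uuid}",
            "lv": f"osd-block-{uuid}",
        }
    return layout

# Example mirroring the ceph_osd_devices items from the log above:
devices = {
    "sdb": {"osd_lvm_uuid": "346e0f34-2e25-5bf0-9181-de3fb405aafc"},
    "sdc": {"osd_lvm_uuid": "ca3f02ac-b393-504d-bf7e-2b1a4059feca"},
}
print(lvm_layout(devices)["/dev/sdb"]["vg"])
```

Because the names are pure functions of the UUID, a rerun of the play finds the existing VGs/LVs by name and stays idempotent.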
orchestrator | 2025-08-29 14:47:53.067362 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:47:53.067373 | orchestrator | Friday 29 August 2025 14:47:50 +0000 (0:00:01.776) 0:00:37.582 ********* 2025-08-29 14:47:53.067383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:53.067395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:53.067406 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:53.067416 | orchestrator | 2025-08-29 14:47:53.067427 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:47:53.067438 | orchestrator | Friday 29 August 2025 14:47:50 +0000 (0:00:00.161) 0:00:37.744 ********* 2025-08-29 14:47:53.067449 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'}) 2025-08-29 14:47:53.067460 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'}) 2025-08-29 14:47:53.067471 | orchestrator | 2025-08-29 14:47:53.067489 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:47:58.715647 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:02.318) 0:00:40.063 ********* 2025-08-29 14:47:58.715750 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.715765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.715775 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.715786 | orchestrator | 2025-08-29 14:47:58.715796 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:47:58.715806 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.166) 0:00:40.229 ********* 2025-08-29 14:47:58.715816 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.715825 | orchestrator | 2025-08-29 14:47:58.715834 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:47:58.715844 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.151) 0:00:40.381 ********* 2025-08-29 14:47:58.715854 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.715878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.715888 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.715898 | orchestrator | 2025-08-29 14:47:58.715907 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:47:58.715917 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.159) 0:00:40.540 ********* 2025-08-29 14:47:58.715926 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.715935 | orchestrator | 2025-08-29 14:47:58.715945 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:47:58.715954 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.144) 0:00:40.685 ********* 2025-08-29 14:47:58.715964 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.715973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.715983 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.715992 | orchestrator | 2025-08-29 14:47:58.716002 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:47:58.716011 | orchestrator | Friday 29 August 2025 14:47:53 +0000 (0:00:00.159) 0:00:40.844 ********* 2025-08-29 14:47:58.716021 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716030 | orchestrator | 2025-08-29 14:47:58.716043 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:47:58.716053 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.353) 0:00:41.198 ********* 2025-08-29 14:47:58.716063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.716072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.716082 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716091 | orchestrator | 2025-08-29 14:47:58.716101 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:47:58.716110 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.144) 0:00:41.343 ********* 2025-08-29 14:47:58.716119 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:58.716129 | orchestrator | 2025-08-29 14:47:58.716139 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-08-29 14:47:58.716148 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.154) 0:00:41.497 ********* 2025-08-29 14:47:58.716165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.716175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.716185 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716195 | orchestrator | 2025-08-29 14:47:58.716205 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:47:58.716216 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.162) 0:00:41.660 ********* 2025-08-29 14:47:58.716227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.716238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.716248 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716259 | orchestrator | 2025-08-29 14:47:58.716269 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:47:58.716280 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.146) 0:00:41.806 ********* 2025-08-29 14:47:58.716354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:47:58.716368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 
'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:47:58.716379 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716390 | orchestrator | 2025-08-29 14:47:58.716401 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:47:58.716412 | orchestrator | Friday 29 August 2025 14:47:54 +0000 (0:00:00.165) 0:00:41.972 ********* 2025-08-29 14:47:58.716422 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716432 | orchestrator | 2025-08-29 14:47:58.716444 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:47:58.716454 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.137) 0:00:42.110 ********* 2025-08-29 14:47:58.716464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716473 | orchestrator | 2025-08-29 14:47:58.716482 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:47:58.716492 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.140) 0:00:42.250 ********* 2025-08-29 14:47:58.716501 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716511 | orchestrator | 2025-08-29 14:47:58.716520 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:47:58.716530 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.144) 0:00:42.394 ********* 2025-08-29 14:47:58.716539 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:47:58.716549 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:47:58.716559 | orchestrator | } 2025-08-29 14:47:58.716568 | orchestrator | 2025-08-29 14:47:58.716578 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:47:58.716587 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.150) 0:00:42.545 ********* 2025-08-29 14:47:58.716597 | 
orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:47:58.716606 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:47:58.716616 | orchestrator | } 2025-08-29 14:47:58.716625 | orchestrator | 2025-08-29 14:47:58.716635 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:47:58.716644 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.157) 0:00:42.702 ********* 2025-08-29 14:47:58.716654 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:47:58.716664 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:47:58.716673 | orchestrator | } 2025-08-29 14:47:58.716690 | orchestrator | 2025-08-29 14:47:58.716700 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:47:58.716709 | orchestrator | Friday 29 August 2025 14:47:55 +0000 (0:00:00.149) 0:00:42.852 ********* 2025-08-29 14:47:58.716719 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:58.716729 | orchestrator | 2025-08-29 14:47:58.716738 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:47:58.716748 | orchestrator | Friday 29 August 2025 14:47:56 +0000 (0:00:00.725) 0:00:43.577 ********* 2025-08-29 14:47:58.716757 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:58.716767 | orchestrator | 2025-08-29 14:47:58.716781 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:47:58.716791 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:00.477) 0:00:44.055 ********* 2025-08-29 14:47:58.716800 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:58.716810 | orchestrator | 2025-08-29 14:47:58.716819 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:47:58.716829 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:00.493) 0:00:44.548 ********* 2025-08-29 
14:47:58.716838 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:58.716848 | orchestrator | 2025-08-29 14:47:58.716857 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:47:58.716867 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:00.152) 0:00:44.701 ********* 2025-08-29 14:47:58.716876 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716886 | orchestrator | 2025-08-29 14:47:58.716896 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:47:58.716905 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:00.129) 0:00:44.831 ********* 2025-08-29 14:47:58.716915 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.716924 | orchestrator | 2025-08-29 14:47:58.716933 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:47:58.716943 | orchestrator | Friday 29 August 2025 14:47:57 +0000 (0:00:00.113) 0:00:44.944 ********* 2025-08-29 14:47:58.716952 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:47:58.716962 | orchestrator |  "vgs_report": { 2025-08-29 14:47:58.716973 | orchestrator |  "vg": [] 2025-08-29 14:47:58.716983 | orchestrator |  } 2025-08-29 14:47:58.716992 | orchestrator | } 2025-08-29 14:47:58.717002 | orchestrator | 2025-08-29 14:47:58.717012 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:47:58.717021 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.203) 0:00:45.147 ********* 2025-08-29 14:47:58.717031 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.717040 | orchestrator | 2025-08-29 14:47:58.717050 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:47:58.717059 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.127) 0:00:45.275 ********* 2025-08-29 
14:47:58.717069 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.717078 | orchestrator | 2025-08-29 14:47:58.717088 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:47:58.717098 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.141) 0:00:45.416 ********* 2025-08-29 14:47:58.717107 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.717117 | orchestrator | 2025-08-29 14:47:58.717126 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:47:58.717136 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.141) 0:00:45.558 ********* 2025-08-29 14:47:58.717145 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:58.717155 | orchestrator | 2025-08-29 14:47:58.717165 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:47:58.717180 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.153) 0:00:45.711 ********* 2025-08-29 14:48:03.605053 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605165 | orchestrator | 2025-08-29 14:48:03.605192 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:48:03.605240 | orchestrator | Friday 29 August 2025 14:47:58 +0000 (0:00:00.144) 0:00:45.856 ********* 2025-08-29 14:48:03.605252 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605263 | orchestrator | 2025-08-29 14:48:03.605274 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:48:03.605285 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.333) 0:00:46.190 ********* 2025-08-29 14:48:03.605319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605330 | orchestrator | 2025-08-29 14:48:03.605341 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-08-29 14:48:03.605352 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.157) 0:00:46.347 ********* 2025-08-29 14:48:03.605363 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605373 | orchestrator | 2025-08-29 14:48:03.605384 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:48:03.605395 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.160) 0:00:46.507 ********* 2025-08-29 14:48:03.605406 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605416 | orchestrator | 2025-08-29 14:48:03.605427 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:48:03.605438 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.143) 0:00:46.651 ********* 2025-08-29 14:48:03.605449 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605459 | orchestrator | 2025-08-29 14:48:03.605470 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:48:03.605481 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.147) 0:00:46.799 ********* 2025-08-29 14:48:03.605491 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605502 | orchestrator | 2025-08-29 14:48:03.605513 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:48:03.605524 | orchestrator | Friday 29 August 2025 14:47:59 +0000 (0:00:00.154) 0:00:46.953 ********* 2025-08-29 14:48:03.605534 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605545 | orchestrator | 2025-08-29 14:48:03.605556 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:48:03.605567 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.142) 0:00:47.096 ********* 2025-08-29 14:48:03.605577 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 14:48:03.605588 | orchestrator | 2025-08-29 14:48:03.605601 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:48:03.605613 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.146) 0:00:47.242 ********* 2025-08-29 14:48:03.605626 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605638 | orchestrator | 2025-08-29 14:48:03.605650 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:48:03.605662 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.140) 0:00:47.383 ********* 2025-08-29 14:48:03.605690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.605704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.605717 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605730 | orchestrator | 2025-08-29 14:48:03.605743 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:48:03.605756 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.178) 0:00:47.562 ********* 2025-08-29 14:48:03.605768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.605782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.605802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605814 | orchestrator | 2025-08-29 14:48:03.605827 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-08-29 14:48:03.605839 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.150) 0:00:47.712 ********* 2025-08-29 14:48:03.605851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.605864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.605876 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605888 | orchestrator | 2025-08-29 14:48:03.605900 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:48:03.605913 | orchestrator | Friday 29 August 2025 14:48:00 +0000 (0:00:00.171) 0:00:47.884 ********* 2025-08-29 14:48:03.605926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.605938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.605949 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.605960 | orchestrator | 2025-08-29 14:48:03.605971 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:48:03.605998 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.361) 0:00:48.246 ********* 2025-08-29 14:48:03.606010 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 
'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606124 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.606135 | orchestrator | 2025-08-29 14:48:03.606146 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:48:03.606156 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.169) 0:00:48.416 ********* 2025-08-29 14:48:03.606167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606178 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606189 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.606199 | orchestrator | 2025-08-29 14:48:03.606210 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:48:03.606221 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.173) 0:00:48.589 ********* 2025-08-29 14:48:03.606232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606254 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.606265 | orchestrator | 2025-08-29 14:48:03.606275 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:48:03.606286 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.189) 0:00:48.779 ********* 2025-08-29 14:48:03.606322 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606354 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.606365 | orchestrator | 2025-08-29 14:48:03.606376 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:48:03.606424 | orchestrator | Friday 29 August 2025 14:48:01 +0000 (0:00:00.156) 0:00:48.936 ********* 2025-08-29 14:48:03.606437 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:03.606448 | orchestrator | 2025-08-29 14:48:03.606459 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:48:03.606470 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.518) 0:00:49.454 ********* 2025-08-29 14:48:03.606481 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:03.606491 | orchestrator | 2025-08-29 14:48:03.606502 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:48:03.606513 | orchestrator | Friday 29 August 2025 14:48:02 +0000 (0:00:00.464) 0:00:49.918 ********* 2025-08-29 14:48:03.606524 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:03.606535 | orchestrator | 2025-08-29 14:48:03.606546 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:48:03.606557 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.163) 0:00:50.082 ********* 2025-08-29 14:48:03.606568 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'vg_name': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'}) 2025-08-29 14:48:03.606579 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'vg_name': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'}) 2025-08-29 14:48:03.606590 | orchestrator | 2025-08-29 14:48:03.606601 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:48:03.606612 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.175) 0:00:50.257 ********* 2025-08-29 14:48:03.606623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606644 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:03.606655 | orchestrator | 2025-08-29 14:48:03.606666 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:48:03.606677 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.170) 0:00:50.428 ********* 2025-08-29 14:48:03.606688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})  2025-08-29 14:48:03.606699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})  2025-08-29 14:48:03.606720 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:09.420019 | orchestrator | 2025-08-29 14:48:09.420123 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:48:09.420139 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.174) 0:00:50.602 ********* 2025-08-29 14:48:09.420151 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'})
2025-08-29 14:48:09.420164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'})
2025-08-29 14:48:09.420175 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:48:09.420187 | orchestrator |
2025-08-29 14:48:09.420198 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 14:48:09.420209 | orchestrator | Friday 29 August 2025 14:48:03 +0000 (0:00:00.172) 0:00:50.775 *********
2025-08-29 14:48:09.420247 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:48:09.420259 | orchestrator |  "lvm_report": {
2025-08-29 14:48:09.420272 | orchestrator |  "lv": [
2025-08-29 14:48:09.420284 | orchestrator |  {
2025-08-29 14:48:09.420349 | orchestrator |  "lv_name": "osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc",
2025-08-29 14:48:09.420363 | orchestrator |  "vg_name": "ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc"
2025-08-29 14:48:09.420374 | orchestrator |  },
2025-08-29 14:48:09.420385 | orchestrator |  {
2025-08-29 14:48:09.420396 | orchestrator |  "lv_name": "osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca",
2025-08-29 14:48:09.420406 | orchestrator |  "vg_name": "ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca"
2025-08-29 14:48:09.420417 | orchestrator |  }
2025-08-29 14:48:09.420428 | orchestrator |  ],
2025-08-29 14:48:09.420438 | orchestrator |  "pv": [
2025-08-29 14:48:09.420449 | orchestrator |  {
2025-08-29 14:48:09.420460 | orchestrator |  "pv_name": "/dev/sdb",
2025-08-29 14:48:09.420471 | orchestrator |  "vg_name": "ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc"
2025-08-29 14:48:09.420482 | orchestrator |  },
2025-08-29 14:48:09.420492 | orchestrator |  {
2025-08-29 14:48:09.420503 | orchestrator |  "pv_name": "/dev/sdc",
2025-08-29 14:48:09.420514 | orchestrator |  "vg_name": "ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca"
2025-08-29 14:48:09.420525 | orchestrator |  }
2025-08-29 14:48:09.420535 | orchestrator |  ]
2025-08-29 14:48:09.420546 | orchestrator |  }
2025-08-29 14:48:09.420560 | orchestrator | }
2025-08-29 14:48:09.420573 | orchestrator |
2025-08-29 14:48:09.420585 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 14:48:09.420597 | orchestrator |
2025-08-29 14:48:09.420609 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:48:09.420621 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.492) 0:00:51.268 *********
2025-08-29 14:48:09.420633 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:48:09.420646 | orchestrator |
2025-08-29 14:48:09.420673 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:48:09.420687 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.257) 0:00:51.525 *********
2025-08-29 14:48:09.420699 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:48:09.420711 | orchestrator |
2025-08-29 14:48:09.420724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:48:09.420737 | orchestrator | Friday 29 August 2025 14:48:04 +0000 (0:00:00.234) 0:00:51.759 *********
2025-08-29 14:48:09.420749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:48:09.420761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:48:09.420773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:48:09.420785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:48:09.420797 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:48:09.420809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:48:09.420821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:48:09.420834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:48:09.420845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:48:09.420858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:48:09.420870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:48:09.420889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:48:09.420903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:48:09.420916 | orchestrator | 2025-08-29 14:48:09.420927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.420937 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.421) 0:00:52.180 ********* 2025-08-29 14:48:09.420948 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.420959 | orchestrator | 2025-08-29 14:48:09.420974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.420985 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.204) 0:00:52.385 ********* 2025-08-29 14:48:09.420996 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421006 | orchestrator | 2025-08-29 14:48:09.421017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421047 | orchestrator | 
Friday 29 August 2025 14:48:05 +0000 (0:00:00.208) 0:00:52.594 ********* 2025-08-29 14:48:09.421058 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421069 | orchestrator | 2025-08-29 14:48:09.421079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421090 | orchestrator | Friday 29 August 2025 14:48:05 +0000 (0:00:00.208) 0:00:52.802 ********* 2025-08-29 14:48:09.421101 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421111 | orchestrator | 2025-08-29 14:48:09.421122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421132 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.206) 0:00:53.009 ********* 2025-08-29 14:48:09.421143 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421153 | orchestrator | 2025-08-29 14:48:09.421164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421175 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.197) 0:00:53.207 ********* 2025-08-29 14:48:09.421185 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421196 | orchestrator | 2025-08-29 14:48:09.421206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421217 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.468) 0:00:53.676 ********* 2025-08-29 14:48:09.421227 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421238 | orchestrator | 2025-08-29 14:48:09.421248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421259 | orchestrator | Friday 29 August 2025 14:48:06 +0000 (0:00:00.185) 0:00:53.861 ********* 2025-08-29 14:48:09.421270 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:09.421280 | orchestrator | 2025-08-29 14:48:09.421290 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421320 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:00.179) 0:00:54.041 ********* 2025-08-29 14:48:09.421331 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661) 2025-08-29 14:48:09.421343 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661) 2025-08-29 14:48:09.421354 | orchestrator | 2025-08-29 14:48:09.421365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421375 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:00.396) 0:00:54.438 ********* 2025-08-29 14:48:09.421386 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9) 2025-08-29 14:48:09.421397 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9) 2025-08-29 14:48:09.421407 | orchestrator | 2025-08-29 14:48:09.421418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421429 | orchestrator | Friday 29 August 2025 14:48:07 +0000 (0:00:00.405) 0:00:54.843 ********* 2025-08-29 14:48:09.421459 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b) 2025-08-29 14:48:09.421477 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b) 2025-08-29 14:48:09.421488 | orchestrator | 2025-08-29 14:48:09.421499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421509 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.380) 0:00:55.224 ********* 2025-08-29 14:48:09.421520 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598) 2025-08-29 14:48:09.421530 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598) 2025-08-29 14:48:09.421541 | orchestrator | 2025-08-29 14:48:09.421551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:09.421562 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.419) 0:00:55.643 ********* 2025-08-29 14:48:09.421573 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:48:09.421583 | orchestrator | 2025-08-29 14:48:09.421594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:09.421604 | orchestrator | Friday 29 August 2025 14:48:08 +0000 (0:00:00.333) 0:00:55.976 ********* 2025-08-29 14:48:09.421615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:48:09.421626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:48:09.421636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:48:09.421647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:48:09.421657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:48:09.421668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:48:09.421678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:48:09.421689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:48:09.421699 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:48:09.421709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:48:09.421720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:48:09.421738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:48:18.282832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:48:18.282922 | orchestrator | 2025-08-29 14:48:18.282938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.282950 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.438) 0:00:56.415 ********* 2025-08-29 14:48:18.282961 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.282973 | orchestrator | 2025-08-29 14:48:18.282983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.282994 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.208) 0:00:56.623 ********* 2025-08-29 14:48:18.283005 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283015 | orchestrator | 2025-08-29 14:48:18.283026 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283037 | orchestrator | Friday 29 August 2025 14:48:09 +0000 (0:00:00.182) 0:00:56.805 ********* 2025-08-29 14:48:18.283048 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283058 | orchestrator | 2025-08-29 14:48:18.283069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283079 | orchestrator | Friday 29 August 2025 14:48:10 +0000 (0:00:00.474) 0:00:57.280 ********* 2025-08-29 14:48:18.283111 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:48:18.283123 | orchestrator | 2025-08-29 14:48:18.283133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283144 | orchestrator | Friday 29 August 2025 14:48:10 +0000 (0:00:00.189) 0:00:57.469 ********* 2025-08-29 14:48:18.283155 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283165 | orchestrator | 2025-08-29 14:48:18.283176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283187 | orchestrator | Friday 29 August 2025 14:48:10 +0000 (0:00:00.203) 0:00:57.673 ********* 2025-08-29 14:48:18.283197 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283208 | orchestrator | 2025-08-29 14:48:18.283218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283229 | orchestrator | Friday 29 August 2025 14:48:10 +0000 (0:00:00.214) 0:00:57.887 ********* 2025-08-29 14:48:18.283240 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283250 | orchestrator | 2025-08-29 14:48:18.283261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283271 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:00.201) 0:00:58.088 ********* 2025-08-29 14:48:18.283282 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283292 | orchestrator | 2025-08-29 14:48:18.283356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283368 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:00.188) 0:00:58.277 ********* 2025-08-29 14:48:18.283379 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:48:18.283392 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:48:18.283404 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
14:48:18.283416 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:48:18.283427 | orchestrator | 2025-08-29 14:48:18.283439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283451 | orchestrator | Friday 29 August 2025 14:48:11 +0000 (0:00:00.627) 0:00:58.904 ********* 2025-08-29 14:48:18.283462 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283473 | orchestrator | 2025-08-29 14:48:18.283485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283496 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.197) 0:00:59.102 ********* 2025-08-29 14:48:18.283508 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283519 | orchestrator | 2025-08-29 14:48:18.283531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283543 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.193) 0:00:59.295 ********* 2025-08-29 14:48:18.283555 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283567 | orchestrator | 2025-08-29 14:48:18.283579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:18.283590 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.212) 0:00:59.508 ********* 2025-08-29 14:48:18.283602 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283613 | orchestrator | 2025-08-29 14:48:18.283625 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:48:18.283636 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.179) 0:00:59.687 ********* 2025-08-29 14:48:18.283648 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283660 | orchestrator | 2025-08-29 14:48:18.283671 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 14:48:18.283683 | orchestrator | Friday 29 August 2025 14:48:12 +0000 (0:00:00.275) 0:00:59.963 ********* 2025-08-29 14:48:18.283694 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}}) 2025-08-29 14:48:18.283708 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd9c5dbd3-dfd6-59a8-a565-791b79996791'}}) 2025-08-29 14:48:18.283727 | orchestrator | 2025-08-29 14:48:18.283738 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:48:18.283749 | orchestrator | Friday 29 August 2025 14:48:13 +0000 (0:00:00.210) 0:01:00.174 ********* 2025-08-29 14:48:18.283761 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}) 2025-08-29 14:48:18.283772 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'}) 2025-08-29 14:48:18.283783 | orchestrator | 2025-08-29 14:48:18.283794 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:48:18.283820 | orchestrator | Friday 29 August 2025 14:48:15 +0000 (0:00:01.848) 0:01:02.022 ********* 2025-08-29 14:48:18.283832 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:18.283843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:18.283854 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.283865 | orchestrator | 2025-08-29 14:48:18.283876 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 14:48:18.283887 | orchestrator | Friday 29 August 2025 14:48:15 +0000 (0:00:00.167) 0:01:02.190 ********* 2025-08-29 14:48:18.283897 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}) 2025-08-29 14:48:18.283923 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'}) 2025-08-29 14:48:18.283935 | orchestrator | 2025-08-29 14:48:18.283947 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:48:18.283958 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:01.400) 0:01:03.590 ********* 2025-08-29 14:48:18.283969 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:18.283980 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:18.283991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284002 | orchestrator | 2025-08-29 14:48:18.284012 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:48:18.284023 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:00.165) 0:01:03.756 ********* 2025-08-29 14:48:18.284034 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284045 | orchestrator | 2025-08-29 14:48:18.284055 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:48:18.284066 | orchestrator | Friday 29 August 2025 14:48:16 +0000 (0:00:00.140) 0:01:03.897 ********* 2025-08-29 14:48:18.284077 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:18.284092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:18.284104 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284115 | orchestrator | 2025-08-29 14:48:18.284125 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:48:18.284136 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.175) 0:01:04.072 ********* 2025-08-29 14:48:18.284147 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284157 | orchestrator | 2025-08-29 14:48:18.284168 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:48:18.284185 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.153) 0:01:04.226 ********* 2025-08-29 14:48:18.284196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:18.284207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:18.284218 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284229 | orchestrator | 2025-08-29 14:48:18.284239 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:48:18.284250 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.177) 0:01:04.403 ********* 2025-08-29 14:48:18.284261 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284272 | orchestrator | 2025-08-29 14:48:18.284282 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:48:18.284293 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.147) 0:01:04.551 ********* 2025-08-29 14:48:18.284329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:18.284341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:18.284351 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:18.284362 | orchestrator | 2025-08-29 14:48:18.284373 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:48:18.284384 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.154) 0:01:04.705 ********* 2025-08-29 14:48:18.284395 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:18.284405 | orchestrator | 2025-08-29 14:48:18.284416 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:48:18.284427 | orchestrator | Friday 29 August 2025 14:48:17 +0000 (0:00:00.171) 0:01:04.876 ********* 2025-08-29 14:48:18.284445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:24.630932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:24.630982 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.630989 | orchestrator | 2025-08-29 14:48:24.630993 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:48:24.630998 | orchestrator | Friday 29 August 2025 
14:48:18 +0000 (0:00:00.406) 0:01:05.282 ********* 2025-08-29 14:48:24.631002 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:24.631006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:24.631010 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631013 | orchestrator | 2025-08-29 14:48:24.631017 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:48:24.631021 | orchestrator | Friday 29 August 2025 14:48:18 +0000 (0:00:00.172) 0:01:05.455 ********* 2025-08-29 14:48:24.631025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:24.631029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:24.631033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631037 | orchestrator | 2025-08-29 14:48:24.631049 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:48:24.631053 | orchestrator | Friday 29 August 2025 14:48:18 +0000 (0:00:00.214) 0:01:05.669 ********* 2025-08-29 14:48:24.631057 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631061 | orchestrator | 2025-08-29 14:48:24.631064 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:48:24.631068 | orchestrator | Friday 29 August 2025 14:48:18 +0000 (0:00:00.187) 0:01:05.857 ********* 2025-08-29 14:48:24.631072 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
14:48:24.631075 | orchestrator | 2025-08-29 14:48:24.631079 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:48:24.631083 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.171) 0:01:06.028 ********* 2025-08-29 14:48:24.631086 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631090 | orchestrator | 2025-08-29 14:48:24.631093 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:48:24.631104 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.152) 0:01:06.181 ********* 2025-08-29 14:48:24.631108 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:48:24.631112 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:48:24.631116 | orchestrator | } 2025-08-29 14:48:24.631120 | orchestrator | 2025-08-29 14:48:24.631123 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:48:24.631127 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.157) 0:01:06.338 ********* 2025-08-29 14:48:24.631131 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:48:24.631134 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:48:24.631138 | orchestrator | } 2025-08-29 14:48:24.631142 | orchestrator | 2025-08-29 14:48:24.631146 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:48:24.631149 | orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.161) 0:01:06.500 ********* 2025-08-29 14:48:24.631153 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:48:24.631157 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:48:24.631161 | orchestrator | } 2025-08-29 14:48:24.631165 | orchestrator | 2025-08-29 14:48:24.631168 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:48:24.631172 | 
orchestrator | Friday 29 August 2025 14:48:19 +0000 (0:00:00.183) 0:01:06.684 ********* 2025-08-29 14:48:24.631176 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:24.631180 | orchestrator | 2025-08-29 14:48:24.631183 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:48:24.631187 | orchestrator | Friday 29 August 2025 14:48:20 +0000 (0:00:00.516) 0:01:07.201 ********* 2025-08-29 14:48:24.631191 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:24.631194 | orchestrator | 2025-08-29 14:48:24.631198 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:48:24.631202 | orchestrator | Friday 29 August 2025 14:48:20 +0000 (0:00:00.535) 0:01:07.736 ********* 2025-08-29 14:48:24.631205 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:24.631209 | orchestrator | 2025-08-29 14:48:24.631213 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:48:24.631216 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.484) 0:01:08.220 ********* 2025-08-29 14:48:24.631220 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:24.631224 | orchestrator | 2025-08-29 14:48:24.631227 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:48:24.631231 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.370) 0:01:08.591 ********* 2025-08-29 14:48:24.631235 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631238 | orchestrator | 2025-08-29 14:48:24.631242 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:48:24.631246 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.131) 0:01:08.722 ********* 2025-08-29 14:48:24.631249 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631256 | orchestrator | 2025-08-29 14:48:24.631260 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:48:24.631263 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.116) 0:01:08.839 ********* 2025-08-29 14:48:24.631267 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:48:24.631271 | orchestrator |  "vgs_report": { 2025-08-29 14:48:24.631275 | orchestrator |  "vg": [] 2025-08-29 14:48:24.631286 | orchestrator |  } 2025-08-29 14:48:24.631290 | orchestrator | } 2025-08-29 14:48:24.631294 | orchestrator | 2025-08-29 14:48:24.631316 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:48:24.631320 | orchestrator | Friday 29 August 2025 14:48:21 +0000 (0:00:00.138) 0:01:08.978 ********* 2025-08-29 14:48:24.631324 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631328 | orchestrator | 2025-08-29 14:48:24.631331 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:48:24.631335 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.143) 0:01:09.121 ********* 2025-08-29 14:48:24.631339 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631342 | orchestrator | 2025-08-29 14:48:24.631346 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:48:24.631350 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.149) 0:01:09.271 ********* 2025-08-29 14:48:24.631354 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631357 | orchestrator | 2025-08-29 14:48:24.631361 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:48:24.631365 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.138) 0:01:09.409 ********* 2025-08-29 14:48:24.631373 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631377 | orchestrator | 2025-08-29 14:48:24.631381 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:48:24.631384 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.128) 0:01:09.538 ********* 2025-08-29 14:48:24.631388 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631392 | orchestrator | 2025-08-29 14:48:24.631396 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:48:24.631399 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.151) 0:01:09.690 ********* 2025-08-29 14:48:24.631403 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631407 | orchestrator | 2025-08-29 14:48:24.631410 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:48:24.631414 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.137) 0:01:09.827 ********* 2025-08-29 14:48:24.631418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631421 | orchestrator | 2025-08-29 14:48:24.631425 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:48:24.631429 | orchestrator | Friday 29 August 2025 14:48:22 +0000 (0:00:00.142) 0:01:09.969 ********* 2025-08-29 14:48:24.631433 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631436 | orchestrator | 2025-08-29 14:48:24.631440 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:48:24.631444 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:00.138) 0:01:10.107 ********* 2025-08-29 14:48:24.631447 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631451 | orchestrator | 2025-08-29 14:48:24.631455 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:48:24.631459 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:00.341) 0:01:10.449 ********* 
2025-08-29 14:48:24.631464 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631490 | orchestrator | 2025-08-29 14:48:24.631495 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:48:24.631499 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:00.137) 0:01:10.586 ********* 2025-08-29 14:48:24.631503 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631507 | orchestrator | 2025-08-29 14:48:24.631510 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:48:24.631539 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:00.142) 0:01:10.729 ********* 2025-08-29 14:48:24.631544 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631548 | orchestrator | 2025-08-29 14:48:24.631552 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:48:24.631557 | orchestrator | Friday 29 August 2025 14:48:23 +0000 (0:00:00.155) 0:01:10.885 ********* 2025-08-29 14:48:24.631561 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631565 | orchestrator | 2025-08-29 14:48:24.631569 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:48:24.631573 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.147) 0:01:11.033 ********* 2025-08-29 14:48:24.631577 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631581 | orchestrator | 2025-08-29 14:48:24.631586 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:48:24.631590 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.142) 0:01:11.176 ********* 2025-08-29 14:48:24.631594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 
14:48:24.631599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:24.631603 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631607 | orchestrator | 2025-08-29 14:48:24.631611 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:48:24.631615 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.159) 0:01:11.335 ********* 2025-08-29 14:48:24.631620 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:24.631624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:24.631628 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:24.631632 | orchestrator | 2025-08-29 14:48:24.631636 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:48:24.631640 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.162) 0:01:11.498 ********* 2025-08-29 14:48:24.631648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.676622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.676723 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.676742 | orchestrator | 2025-08-29 14:48:27.676757 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:48:27.676772 | orchestrator | Friday 29 August 2025 
14:48:24 +0000 (0:00:00.135) 0:01:11.633 ********* 2025-08-29 14:48:27.676786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.676799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.676813 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.676827 | orchestrator | 2025-08-29 14:48:27.676840 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:48:27.676854 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.140) 0:01:11.774 ********* 2025-08-29 14:48:27.676867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.676908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.676922 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.676935 | orchestrator | 2025-08-29 14:48:27.676948 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:48:27.676962 | orchestrator | Friday 29 August 2025 14:48:24 +0000 (0:00:00.158) 0:01:11.933 ********* 2025-08-29 14:48:27.676975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.676989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677003 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:48:27.677017 | orchestrator | 2025-08-29 14:48:27.677030 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:48:27.677043 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.151) 0:01:12.084 ********* 2025-08-29 14:48:27.677057 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.677070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677084 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.677097 | orchestrator | 2025-08-29 14:48:27.677109 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:48:27.677123 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.390) 0:01:12.475 ********* 2025-08-29 14:48:27.677137 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.677151 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677165 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.677179 | orchestrator | 2025-08-29 14:48:27.677195 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:48:27.677209 | orchestrator | Friday 29 August 2025 14:48:25 +0000 (0:00:00.161) 0:01:12.637 ********* 2025-08-29 14:48:27.677224 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:27.677238 | orchestrator | 2025-08-29 14:48:27.677252 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-08-29 14:48:27.677265 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:00.517) 0:01:13.154 ********* 2025-08-29 14:48:27.677278 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:27.677291 | orchestrator | 2025-08-29 14:48:27.677323 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:48:27.677337 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:00.533) 0:01:13.688 ********* 2025-08-29 14:48:27.677350 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:27.677364 | orchestrator | 2025-08-29 14:48:27.677377 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:48:27.677390 | orchestrator | Friday 29 August 2025 14:48:26 +0000 (0:00:00.151) 0:01:13.839 ********* 2025-08-29 14:48:27.677404 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'vg_name': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}) 2025-08-29 14:48:27.677419 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'vg_name': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'}) 2025-08-29 14:48:27.677434 | orchestrator | 2025-08-29 14:48:27.677448 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:48:27.677479 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.172) 0:01:14.011 ********* 2025-08-29 14:48:27.677524 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.677539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677554 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 14:48:27.677568 | orchestrator | 2025-08-29 14:48:27.677583 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:48:27.677598 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.157) 0:01:14.169 ********* 2025-08-29 14:48:27.677613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.677628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677642 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.677656 | orchestrator | 2025-08-29 14:48:27.677670 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:48:27.677684 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.160) 0:01:14.329 ********* 2025-08-29 14:48:27.677698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'})  2025-08-29 14:48:27.677730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'})  2025-08-29 14:48:27.677743 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:27.677756 | orchestrator | 2025-08-29 14:48:27.677769 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 14:48:27.677784 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.168) 0:01:14.497 ********* 2025-08-29 14:48:27.677797 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:48:27.677810 | orchestrator |  "lvm_report": { 2025-08-29 14:48:27.677824 | orchestrator |  "lv": [ 2025-08-29 
14:48:27.677838 | orchestrator |  { 2025-08-29 14:48:27.677851 | orchestrator |  "lv_name": "osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5", 2025-08-29 14:48:27.677865 | orchestrator |  "vg_name": "ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5" 2025-08-29 14:48:27.677878 | orchestrator |  }, 2025-08-29 14:48:27.677897 | orchestrator |  { 2025-08-29 14:48:27.677911 | orchestrator |  "lv_name": "osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791", 2025-08-29 14:48:27.677925 | orchestrator |  "vg_name": "ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791" 2025-08-29 14:48:27.677938 | orchestrator |  } 2025-08-29 14:48:27.677952 | orchestrator |  ], 2025-08-29 14:48:27.677966 | orchestrator |  "pv": [ 2025-08-29 14:48:27.677981 | orchestrator |  { 2025-08-29 14:48:27.677995 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 14:48:27.678009 | orchestrator |  "vg_name": "ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5" 2025-08-29 14:48:27.678124 | orchestrator |  }, 2025-08-29 14:48:27.678139 | orchestrator |  { 2025-08-29 14:48:27.678154 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 14:48:27.678168 | orchestrator |  "vg_name": "ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791" 2025-08-29 14:48:27.678182 | orchestrator |  } 2025-08-29 14:48:27.678195 | orchestrator |  ] 2025-08-29 14:48:27.678208 | orchestrator |  } 2025-08-29 14:48:27.678222 | orchestrator | } 2025-08-29 14:48:27.678235 | orchestrator | 2025-08-29 14:48:27.678249 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:48:27.678263 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:48:27.678287 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:48:27.678318 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 14:48:27.678334 | orchestrator | 2025-08-29 14:48:27.678348 | 
orchestrator | 2025-08-29 14:48:27.678362 | orchestrator | 2025-08-29 14:48:27.678376 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:48:27.678389 | orchestrator | Friday 29 August 2025 14:48:27 +0000 (0:00:00.151) 0:01:14.649 ********* 2025-08-29 14:48:27.678403 | orchestrator | =============================================================================== 2025-08-29 14:48:27.678417 | orchestrator | Create block VGs -------------------------------------------------------- 5.59s 2025-08-29 14:48:27.678431 | orchestrator | Create block LVs -------------------------------------------------------- 5.13s 2025-08-29 14:48:27.678444 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.98s 2025-08-29 14:48:27.678458 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2025-08-29 14:48:27.678471 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2025-08-29 14:48:27.678485 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2025-08-29 14:48:27.678499 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s 2025-08-29 14:48:27.678527 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.50s 2025-08-29 14:48:27.678552 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s 2025-08-29 14:48:28.021847 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2025-08-29 14:48:28.021929 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s 2025-08-29 14:48:28.021940 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2025-08-29 14:48:28.021948 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.77s 2025-08-29 14:48:28.021956 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.76s 2025-08-29 14:48:28.021964 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.76s 2025-08-29 14:48:28.021972 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.75s 2025-08-29 14:48:28.021980 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.72s 2025-08-29 14:48:28.021988 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.71s 2025-08-29 14:48:28.021996 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2025-08-29 14:48:28.022004 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.69s 2025-08-29 14:48:40.231261 | orchestrator | 2025-08-29 14:48:40 | INFO  | Task 6b0782a5-b9f9-450c-a3c1-4dc511601e43 (facts) was prepared for execution. 2025-08-29 14:48:40.231384 | orchestrator | 2025-08-29 14:48:40 | INFO  | It takes a moment until task 6b0782a5-b9f9-450c-a3c1-4dc511601e43 (facts) has been started and output is visible here. 
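Background on the play that just finished: lvm2 can emit its reports as JSON (`--reportformat json`), which is how tasks like "Gather DB VGs with total and available size in bytes" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" arrive at the `lvm_report` structure printed near the end. A minimal sketch of that combination step, using hard-coded samples that mirror the testbed-node-5 values in this log (the helper name and the exact fields queried are assumptions, not taken from the playbook source):

```python
import json

# Sample JSON report output mirroring the testbed-node-5 values above.
# A real play would capture this from commands such as:
#   lvs --reportformat json -o lv_name,vg_name
#   pvs --reportformat json -o pv_name,vg_name
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5",
     "vg_name": "ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5"}]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5"}]}]})

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the 'lv' and 'pv' report sections into one dict, analogous
    to the 'Combine JSON from _lvs_cmd_output/_pvs_cmd_output' task."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_out, pvs_out)
print(lvm_report["pv"][0]["pv_name"])  # /dev/sdb
```

The resulting dict has the same `lv`/`pv` shape as the `lvm_report` the "Print LVM report data" task dumped above.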
2025-08-29 14:48:53.131423 | orchestrator | 2025-08-29 14:48:53.131558 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 14:48:53.131584 | orchestrator | 2025-08-29 14:48:53.131602 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:48:53.131619 | orchestrator | Friday 29 August 2025 14:48:44 +0000 (0:00:00.280) 0:00:00.280 ********* 2025-08-29 14:48:53.131630 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:53.131641 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:53.131651 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:53.131690 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:53.131707 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:53.131722 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:53.131738 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:53.131754 | orchestrator | 2025-08-29 14:48:53.131770 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:48:53.131787 | orchestrator | Friday 29 August 2025 14:48:45 +0000 (0:00:01.138) 0:00:01.419 ********* 2025-08-29 14:48:53.131803 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:48:53.131835 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:48:53.131845 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:48:53.131856 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:48:53.131870 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:53.131887 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:53.131904 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:53.131922 | orchestrator | 2025-08-29 14:48:53.131940 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:48:53.131959 | orchestrator | 2025-08-29 14:48:53.131978 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-08-29 14:48:53.131996 | orchestrator | Friday 29 August 2025 14:48:46 +0000 (0:00:01.100) 0:00:02.520 ********* 2025-08-29 14:48:53.132013 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:48:53.132032 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:48:53.132050 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:48:53.132067 | orchestrator | ok: [testbed-manager] 2025-08-29 14:48:53.132085 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:48:53.132105 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:48:53.132124 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:53.132171 | orchestrator | 2025-08-29 14:48:53.132189 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:48:53.132207 | orchestrator | 2025-08-29 14:48:53.132225 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:48:53.132245 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:05.647) 0:00:08.167 ********* 2025-08-29 14:48:53.132264 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:48:53.132282 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:48:53.132300 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:48:53.132348 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:48:53.132367 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:53.132385 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:48:53.132403 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:48:53.132421 | orchestrator | 2025-08-29 14:48:53.132439 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:48:53.132458 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132478 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-08-29 14:48:53.132496 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132514 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132532 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132551 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132569 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:48:53.132587 | orchestrator | 2025-08-29 14:48:53.132605 | orchestrator | 2025-08-29 14:48:53.132640 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:48:53.132658 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.523) 0:00:08.691 ********* 2025-08-29 14:48:53.132676 | orchestrator | =============================================================================== 2025-08-29 14:48:53.132692 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2025-08-29 14:48:53.132727 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2025-08-29 14:48:53.132746 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2025-08-29 14:48:53.132764 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-08-29 14:49:05.319052 | orchestrator | 2025-08-29 14:49:05 | INFO  | Task c436c008-b02f-465a-a416-ea8d37142020 (frr) was prepared for execution. 2025-08-29 14:49:05.319948 | orchestrator | 2025-08-29 14:49:05 | INFO  | It takes a moment until task c436c008-b02f-465a-a416-ea8d37142020 (frr) has been started and output is visible here. 
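Each PLAY RECAP line above encodes per-host result counters (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`); a failing job is usually spotted by a nonzero `failed` or `unreachable` on one of these lines. A minimal sketch of pulling those counters out of one recap line from this log (the regex-based parsing is my own illustration, not something the job performs):

```python
import re

# One PLAY RECAP host line copied from the facts play above.
recap_line = ("testbed-node-5 : ok=2  changed=0 unreachable=0 "
              "failed=0 skipped=2  rescued=0 ignored=0")

def parse_recap(line: str):
    """Split an Ansible PLAY RECAP line into (host, counter dict)."""
    host, _, counters = line.partition(" : ")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap(recap_line)
print(host, stats["ok"], stats["failed"])  # testbed-node-5 2 0
```

With all counters parsed, a simple health check is `stats["failed"] == 0 and stats["unreachable"] == 0`, which holds for every host in the recap above.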
2025-08-29 14:49:32.031508 | orchestrator |
2025-08-29 14:49:32.031625 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-08-29 14:49:32.031645 | orchestrator |
2025-08-29 14:49:32.031658 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-08-29 14:49:32.031671 | orchestrator | Friday 29 August 2025 14:49:09 +0000 (0:00:00.236) 0:00:00.236 *********
2025-08-29 14:49:32.031683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:49:32.031696 | orchestrator |
2025-08-29 14:49:32.031755 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-08-29 14:49:32.031767 | orchestrator | Friday 29 August 2025 14:49:09 +0000 (0:00:00.239) 0:00:00.476 *********
2025-08-29 14:49:32.031779 | orchestrator | changed: [testbed-manager]
2025-08-29 14:49:32.031791 | orchestrator |
2025-08-29 14:49:32.031802 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-08-29 14:49:32.031813 | orchestrator | Friday 29 August 2025 14:49:10 +0000 (0:00:01.187) 0:00:01.664 *********
2025-08-29 14:49:32.031824 | orchestrator | changed: [testbed-manager]
2025-08-29 14:49:32.031834 | orchestrator |
2025-08-29 14:49:32.031846 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-08-29 14:49:32.031867 | orchestrator | Friday 29 August 2025 14:49:21 +0000 (0:00:10.386) 0:00:12.051 *********
2025-08-29 14:49:32.031878 | orchestrator | ok: [testbed-manager]
2025-08-29 14:49:32.031890 | orchestrator |
2025-08-29 14:49:32.031901 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-08-29 14:49:32.031911 | orchestrator | Friday 29 August 2025 14:49:22 +0000 (0:00:01.304) 0:00:13.355 *********
2025-08-29 14:49:32.031922 | orchestrator | changed: [testbed-manager]
2025-08-29 14:49:32.031933 | orchestrator |
2025-08-29 14:49:32.031943 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-08-29 14:49:32.031954 | orchestrator | Friday 29 August 2025 14:49:23 +0000 (0:00:00.949) 0:00:14.305 *********
2025-08-29 14:49:32.031965 | orchestrator | ok: [testbed-manager]
2025-08-29 14:49:32.031976 | orchestrator |
2025-08-29 14:49:32.031986 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-08-29 14:49:32.031998 | orchestrator | Friday 29 August 2025 14:49:24 +0000 (0:00:01.194) 0:00:15.500 *********
2025-08-29 14:49:32.032009 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:49:32.032020 | orchestrator |
2025-08-29 14:49:32.032030 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-08-29 14:49:32.032041 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.827) 0:00:16.328 *********
2025-08-29 14:49:32.032052 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:49:32.032062 | orchestrator |
2025-08-29 14:49:32.032073 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-08-29 14:49:32.032085 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.162) 0:00:16.490 *********
2025-08-29 14:49:32.032114 | orchestrator | changed: [testbed-manager]
2025-08-29 14:49:32.032125 | orchestrator |
2025-08-29 14:49:32.032136 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-08-29 14:49:32.032147 | orchestrator | Friday 29 August 2025 14:49:26 +0000 (0:00:00.993) 0:00:17.484 *********
2025-08-29 14:49:32.032157 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-08-29 14:49:32.032168 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-08-29 14:49:32.032179 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-08-29 14:49:32.032190 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-08-29 14:49:32.032201 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-08-29 14:49:32.032212 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-08-29 14:49:32.032223 | orchestrator |
2025-08-29 14:49:32.032233 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-08-29 14:49:32.032244 | orchestrator | Friday 29 August 2025 14:49:28 +0000 (0:00:02.190) 0:00:19.674 *********
2025-08-29 14:49:32.032255 | orchestrator | ok: [testbed-manager]
2025-08-29 14:49:32.032266 | orchestrator |
2025-08-29 14:49:32.032276 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-08-29 14:49:32.032287 | orchestrator | Friday 29 August 2025 14:49:30 +0000 (0:00:01.392) 0:00:21.067 *********
2025-08-29 14:49:32.032298 | orchestrator | changed: [testbed-manager]
2025-08-29 14:49:32.032350 | orchestrator |
2025-08-29 14:49:32.032367 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:49:32.032386 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:49:32.032397 | orchestrator |
2025-08-29 14:49:32.032408 | orchestrator |
2025-08-29 14:49:32.032419 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:49:32.032444 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:01.434) 0:00:22.501 *********
2025-08-29 14:49:32.032455 | orchestrator | ===============================================================================
2025-08-29 14:49:32.032466 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.39s
2025-08-29 14:49:32.032477 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s
2025-08-29 14:49:32.032487 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s
2025-08-29 14:49:32.032498 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s
2025-08-29 14:49:32.032526 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.30s
2025-08-29 14:49:32.032537 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s
2025-08-29 14:49:32.032548 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.19s
2025-08-29 14:49:32.032559 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.99s
2025-08-29 14:49:32.032570 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s
2025-08-29 14:49:32.032580 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.83s
2025-08-29 14:49:32.032591 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s
2025-08-29 14:49:32.032602 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-08-29 14:49:32.342594 | orchestrator |
2025-08-29 14:49:32.344661 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 14:49:32 UTC 2025
2025-08-29 14:49:32.344751 | orchestrator |
2025-08-29 14:49:34.297631 | orchestrator | 2025-08-29 14:49:34 | INFO  | Collection nutshell is prepared for execution
2025-08-29 14:49:34.297739 | orchestrator | 2025-08-29 14:49:34 | INFO  | D [0] - dotfiles
2025-08-29 14:49:44.468454 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [0] - homer
2025-08-29 14:49:44.468559 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [0] - netdata
2025-08-29 14:49:44.468575 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [0] - openstackclient
2025-08-29 14:49:44.469120 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [0] - phpmyadmin
2025-08-29 14:49:44.469739 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [0] - common
2025-08-29 14:49:44.473723 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [1] -- loadbalancer
2025-08-29 14:49:44.473765 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [2] --- opensearch
2025-08-29 14:49:44.473777 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [2] --- mariadb-ng
2025-08-29 14:49:44.474148 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [3] ---- horizon
2025-08-29 14:49:44.474173 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [3] ---- keystone
2025-08-29 14:49:44.474185 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [4] ----- neutron
2025-08-29 14:49:44.474460 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ wait-for-nova
2025-08-29 14:49:44.474483 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [5] ------ octavia
2025-08-29 14:49:44.475784 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- barbican
2025-08-29 14:49:44.475808 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- designate
2025-08-29 14:49:44.476170 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- ironic
2025-08-29 14:49:44.476191 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- placement
2025-08-29 14:49:44.476517 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- magnum
2025-08-29 14:49:44.477186 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [1] -- openvswitch
2025-08-29 14:49:44.477217 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [2] --- ovn
2025-08-29 14:49:44.477515 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [1] -- memcached
2025-08-29 14:49:44.477537 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [1] -- redis
2025-08-29 14:49:44.477776 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [1] -- rabbitmq-ng
2025-08-29 14:49:44.478112 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [0] - kubernetes
2025-08-29 14:49:44.480693 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [1] -- kubeconfig
2025-08-29 14:49:44.480720 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [1] -- copy-kubeconfig
2025-08-29 14:49:44.481439 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [0] - ceph
2025-08-29 14:49:44.484836 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [1] -- ceph-pools
2025-08-29 14:49:44.484967 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [2] --- copy-ceph-keys
2025-08-29 14:49:44.484977 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [3] ---- cephclient
2025-08-29 14:49:44.484983 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-08-29 14:49:44.484989 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [4] ----- wait-for-keystone
2025-08-29 14:49:44.484995 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ kolla-ceph-rgw
2025-08-29 14:49:44.485000 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ glance
2025-08-29 14:49:44.485014 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ cinder
2025-08-29 14:49:44.485019 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ nova
2025-08-29 14:49:44.485513 | orchestrator | 2025-08-29 14:49:44 | INFO  | A [4] ----- prometheus
2025-08-29 14:49:44.485563 | orchestrator | 2025-08-29 14:49:44 | INFO  | D [5] ------ grafana
2025-08-29 14:49:44.707986 | orchestrator | 2025-08-29 14:49:44 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-08-29 14:49:44.708105 | orchestrator | 2025-08-29 14:49:44 | INFO  | Tasks are running in the background
2025-08-29 14:49:47.561573 | orchestrator | 2025-08-29 14:49:47 | INFO  | No task IDs specified, wait for
all currently running tasks 2025-08-29 14:49:49.698570 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:49:49.698663 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:49:49.700190 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:49:49.703673 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:49:49.703740 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:49:49.704427 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:49:49.705099 | orchestrator | 2025-08-29 14:49:49 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:49:49.705141 | orchestrator | 2025-08-29 14:49:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:52.764272 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:49:52.768373 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:49:52.773483 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:49:52.773995 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:49:52.774422 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:49:52.774919 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:49:52.777648 | orchestrator | 2025-08-29 14:49:52 | INFO  | Task 
5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:49:52.777744 | orchestrator | 2025-08-29 14:49:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:55.814476 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:49:55.814635 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:49:55.819415 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:49:55.819707 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:49:55.822094 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:49:55.823019 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:49:55.823536 | orchestrator | 2025-08-29 14:49:55 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:49:55.823549 | orchestrator | 2025-08-29 14:49:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:49:58.889426 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:49:58.889570 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:49:58.889582 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:49:58.889592 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:49:58.889600 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:49:58.889608 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:49:58.889616 | orchestrator | 2025-08-29 14:49:58 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:49:58.889625 | orchestrator | 2025-08-29 14:49:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:01.914364 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:01.914465 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:01.914591 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:50:01.915436 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:01.917090 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:01.917421 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:01.918143 | orchestrator | 2025-08-29 14:50:01 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:01.918183 | orchestrator | 2025-08-29 14:50:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:05.007876 | orchestrator | 2025-08-29 14:50:04 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:05.008096 | orchestrator | 2025-08-29 14:50:04 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:05.008118 | orchestrator | 2025-08-29 14:50:05 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:50:05.009481 | orchestrator | 2025-08-29 14:50:05 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:05.013534 | orchestrator | 2025-08-29 14:50:05 | INFO  | Task 
9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:05.017850 | orchestrator | 2025-08-29 14:50:05 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:05.020974 | orchestrator | 2025-08-29 14:50:05 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:05.021853 | orchestrator | 2025-08-29 14:50:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:08.055650 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:08.056599 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:08.057477 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state STARTED 2025-08-29 14:50:08.058164 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:08.059845 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:08.060065 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:08.061907 | orchestrator | 2025-08-29 14:50:08 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:08.061936 | orchestrator | 2025-08-29 14:50:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:11.177176 | orchestrator | 2025-08-29 14:50:11.177379 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-08-29 14:50:11.177398 | orchestrator | 2025-08-29 14:50:11.177407 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-08-29 14:50:11.177416 | orchestrator | Friday 29 August 2025 14:49:58 +0000 (0:00:00.794) 0:00:00.794 ********* 2025-08-29 14:50:11.177424 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:50:11.177433 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:50:11.177441 | orchestrator | changed: [testbed-manager] 2025-08-29 14:50:11.177449 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:50:11.177457 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:50:11.177465 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:50:11.177473 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:50:11.177480 | orchestrator | 2025-08-29 14:50:11.177511 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-08-29 14:50:11.177525 | orchestrator | Friday 29 August 2025 14:50:02 +0000 (0:00:03.769) 0:00:04.563 ********* 2025-08-29 14:50:11.177539 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:50:11.177551 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:50:11.177565 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:50:11.177577 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:50:11.177590 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:50:11.177603 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:50:11.177616 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:50:11.177627 | orchestrator | 2025-08-29 14:50:11.177635 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-08-29 14:50:11.177644 | orchestrator | Friday 29 August 2025 14:50:03 +0000 (0:00:01.017) 0:00:05.581 ********* 2025-08-29 14:50:11.177656 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.234372', 'end': '2025-08-29 14:50:03.247801', 'delta': '0:00:00.013429', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177680 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.212340', 'end': '2025-08-29 14:50:03.221772', 'delta': '0:00:00.009432', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177708 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.213241', 'end': '2025-08-29 14:50:03.223630', 'delta': '0:00:00.010389', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177744 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.170449', 'end': '2025-08-29 14:50:03.179074', 'delta': '0:00:00.008625', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177754 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.207007', 'end': '2025-08-29 14:50:03.216538', 'delta': '0:00:00.009531', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177763 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.268887', 'end': '2025-08-29 14:50:03.278035', 'delta': '0:00:00.009148', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177777 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:50:03.406819', 'end': '2025-08-29 14:50:03.414362', 'delta': '0:00:00.007543', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 14:50:11.177795 | orchestrator | 2025-08-29 14:50:11.177805 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-08-29 14:50:11.177814 | orchestrator | Friday 29 August 2025 14:50:04 +0000 (0:00:01.398) 0:00:06.979 ********* 2025-08-29 14:50:11.177823 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:50:11.177832 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:50:11.177841 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:50:11.177849 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:50:11.177859 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:50:11.177873 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:50:11.177886 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:50:11.177896 | orchestrator | 2025-08-29 14:50:11.177906 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-08-29 14:50:11.177916 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:01.384) 0:00:08.364 ********* 2025-08-29 14:50:11.177925 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-08-29 14:50:11.177935 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 14:50:11.177945 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 14:50:11.177954 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 14:50:11.177964 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 14:50:11.177973 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 14:50:11.177983 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 14:50:11.177993 | orchestrator | 2025-08-29 14:50:11.178002 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:50:11.178079 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178093 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178103 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178112 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178121 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178168 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178179 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:50:11.178188 | orchestrator | 2025-08-29 14:50:11.178196 | orchestrator | 2025-08-29 14:50:11.178206 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-08-29 14:50:11.178215 | orchestrator | Friday 29 August 2025 14:50:08 +0000 (0:00:01.942) 0:00:10.306 ********* 2025-08-29 14:50:11.178223 | orchestrator | =============================================================================== 2025-08-29 14:50:11.178232 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.77s 2025-08-29 14:50:11.178241 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.94s 2025-08-29 14:50:11.178249 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.40s 2025-08-29 14:50:11.178267 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.38s 2025-08-29 14:50:11.178276 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.02s 2025-08-29 14:50:11.178285 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:11.178294 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:11.178303 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task c22facb2-c2e6-43d7-851a-04c6b080e8bb is in state SUCCESS 2025-08-29 14:50:11.178340 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:11.248178 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:11.248275 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:11.248366 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:11.248383 | orchestrator | 2025-08-29 14:50:11 | INFO  | Task 
5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:11.248394 | orchestrator | 2025-08-29 14:50:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:14.309118 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:14.309475 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:14.310161 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:14.310798 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:14.311527 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:14.312395 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:14.313122 | orchestrator | 2025-08-29 14:50:14 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:14.313146 | orchestrator | 2025-08-29 14:50:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:17.361250 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:17.361443 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:17.361461 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:17.362201 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:17.362559 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:17.362680 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 
68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:17.363339 | orchestrator | 2025-08-29 14:50:17 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:17.363371 | orchestrator | 2025-08-29 14:50:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:20.463853 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:20.464413 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:20.465226 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:20.466143 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:20.467078 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:20.468038 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:20.469113 | orchestrator | 2025-08-29 14:50:20 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:20.469356 | orchestrator | 2025-08-29 14:50:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:23.517422 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:23.517579 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:23.518431 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:23.519222 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:23.520663 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:23.521894 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:23.522672 | orchestrator | 2025-08-29 14:50:23 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:23.524763 | orchestrator | 2025-08-29 14:50:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:26.646255 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:26.646385 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:26.646397 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:26.646405 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:26.646413 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:26.646420 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:26.646428 | orchestrator | 2025-08-29 14:50:26 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:26.646436 | orchestrator | 2025-08-29 14:50:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:29.725512 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:29.725613 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:29.725635 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:29.725653 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 
9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:29.725669 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:29.725680 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:29.725718 | orchestrator | 2025-08-29 14:50:29 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:29.725729 | orchestrator | 2025-08-29 14:50:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:32.784454 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:32.784577 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state STARTED 2025-08-29 14:50:32.784590 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:32.784600 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:32.784608 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:32.784616 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:32.784625 | orchestrator | 2025-08-29 14:50:32 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:32.784635 | orchestrator | 2025-08-29 14:50:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:35.851052 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:35.851176 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task d919189b-0a6d-449c-a44f-91b5a9b3b733 is in state SUCCESS 2025-08-29 14:50:35.851189 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 
a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:35.852975 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:35.854502 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:35.854887 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:35.857159 | orchestrator | 2025-08-29 14:50:35 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:35.857244 | orchestrator | 2025-08-29 14:50:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:39.075202 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:39.080705 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:39.082324 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:39.082740 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:39.083700 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:39.084469 | orchestrator | 2025-08-29 14:50:39 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:39.084479 | orchestrator | 2025-08-29 14:50:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:42.112544 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:42.112651 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:42.115363 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task 
9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:42.115728 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:42.116172 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:42.116688 | orchestrator | 2025-08-29 14:50:42 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:42.116725 | orchestrator | 2025-08-29 14:50:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:45.141897 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:45.141988 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:45.142414 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:45.143223 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:45.144050 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:45.144822 | orchestrator | 2025-08-29 14:50:45 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:45.144846 | orchestrator | 2025-08-29 14:50:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:48.209822 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:48.211160 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state STARTED 2025-08-29 14:50:48.213856 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:48.215833 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task 
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:48.217905 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:48.221333 | orchestrator | 2025-08-29 14:50:48 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:48.221755 | orchestrator | 2025-08-29 14:50:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:51.272762 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:51.272894 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task a7982df2-b726-406e-a91c-7f950ec9231b is in state SUCCESS 2025-08-29 14:50:51.274478 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:51.275203 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:51.280680 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:51.281466 | orchestrator | 2025-08-29 14:50:51 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:51.281488 | orchestrator | 2025-08-29 14:50:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:54.328597 | orchestrator | 2025-08-29 14:50:54 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:54.330386 | orchestrator | 2025-08-29 14:50:54 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:54.333032 | orchestrator | 2025-08-29 14:50:54 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:54.333838 | orchestrator | 2025-08-29 14:50:54 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:54.335292 | orchestrator | 2025-08-29 14:50:54 | INFO  | Task 
5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:54.335355 | orchestrator | 2025-08-29 14:50:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:50:57.398640 | orchestrator | 2025-08-29 14:50:57 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state STARTED 2025-08-29 14:50:57.398894 | orchestrator | 2025-08-29 14:50:57 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED 2025-08-29 14:50:57.401420 | orchestrator | 2025-08-29 14:50:57 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:50:57.405928 | orchestrator | 2025-08-29 14:50:57 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED 2025-08-29 14:50:57.409022 | orchestrator | 2025-08-29 14:50:57 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:50:57.409069 | orchestrator | 2025-08-29 14:50:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:00.486867 | orchestrator | 2025-08-29 14:51:00.486967 | orchestrator | 2025-08-29 14:51:00.486985 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-08-29 14:51:00.486998 | orchestrator | 2025-08-29 14:51:00.487010 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-08-29 14:51:00.487030 | orchestrator | Friday 29 August 2025 14:49:56 +0000 (0:00:00.886) 0:00:00.886 ********* 2025-08-29 14:51:00.487042 | orchestrator | ok: [testbed-manager] => { 2025-08-29 14:51:00.487054 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-08-29 14:51:00.487067 | orchestrator | } 2025-08-29 14:51:00.487078 | orchestrator | 2025-08-29 14:51:00.487089 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-08-29 14:51:00.487100 | orchestrator | Friday 29 August 2025 14:49:56 +0000 (0:00:00.370) 0:00:01.257 ********* 2025-08-29 14:51:00.487111 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.487123 | orchestrator | 2025-08-29 14:51:00.487134 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-08-29 14:51:00.487145 | orchestrator | Friday 29 August 2025 14:49:58 +0000 (0:00:01.990) 0:00:03.247 ********* 2025-08-29 14:51:00.487156 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-08-29 14:51:00.487167 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-08-29 14:51:00.487178 | orchestrator | 2025-08-29 14:51:00.487189 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-08-29 14:51:00.487200 | orchestrator | Friday 29 August 2025 14:50:00 +0000 (0:00:01.455) 0:00:04.703 ********* 2025-08-29 14:51:00.487211 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.487247 | orchestrator | 2025-08-29 14:51:00.487260 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-08-29 14:51:00.487272 | orchestrator | Friday 29 August 2025 14:50:01 +0000 (0:00:01.504) 0:00:06.207 ********* 2025-08-29 14:51:00.487283 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.487358 | orchestrator | 2025-08-29 14:51:00.487374 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-08-29 14:51:00.487389 | orchestrator | Friday 29 August 2025 14:50:03 +0000 (0:00:01.220) 0:00:07.428 ********* 2025-08-29 14:51:00.487405 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-08-29 14:51:00.487419 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.487431 | orchestrator | 2025-08-29 14:51:00.487444 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-08-29 14:51:00.487456 | orchestrator | Friday 29 August 2025 14:50:29 +0000 (0:00:26.132) 0:00:33.561 ********* 2025-08-29 14:51:00.487487 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.487500 | orchestrator | 2025-08-29 14:51:00.487513 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:00.487526 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:00.487540 | orchestrator | 2025-08-29 14:51:00.487552 | orchestrator | 2025-08-29 14:51:00.487564 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:00.487577 | orchestrator | Friday 29 August 2025 14:50:34 +0000 (0:00:05.074) 0:00:38.635 ********* 2025-08-29 14:51:00.487589 | orchestrator | =============================================================================== 2025-08-29 14:51:00.487602 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.13s 2025-08-29 14:51:00.487615 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.07s 2025-08-29 14:51:00.487627 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.99s 2025-08-29 14:51:00.487639 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.50s 2025-08-29 14:51:00.487652 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.46s 2025-08-29 14:51:00.487664 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.22s 2025-08-29 14:51:00.487676 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.37s 2025-08-29 14:51:00.487688 | orchestrator | 2025-08-29 14:51:00.487700 | orchestrator | 2025-08-29 14:51:00.487713 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-08-29 14:51:00.487725 | orchestrator | 2025-08-29 14:51:00.487738 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-08-29 14:51:00.487750 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:00.485) 0:00:00.486 ********* 2025-08-29 14:51:00.487764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-08-29 14:51:00.487777 | orchestrator | 2025-08-29 14:51:00.487787 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-08-29 14:51:00.487798 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:00.442) 0:00:00.928 ********* 2025-08-29 14:51:00.487809 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-08-29 14:51:00.487819 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-08-29 14:51:00.487830 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-08-29 14:51:00.487841 | orchestrator | 2025-08-29 14:51:00.487852 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-08-29 14:51:00.487862 | orchestrator | Friday 29 August 2025 14:49:57 +0000 (0:00:01.343) 0:00:02.271 ********* 2025-08-29 14:51:00.487872 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.487881 | orchestrator | 2025-08-29 14:51:00.487891 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-08-29 14:51:00.487901 | orchestrator | Friday 29 August 2025 14:49:59 +0000 (0:00:02.075) 
0:00:04.346 ********* 2025-08-29 14:51:00.487925 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-08-29 14:51:00.487936 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.487946 | orchestrator | 2025-08-29 14:51:00.487955 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-08-29 14:51:00.487965 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:43.352) 0:00:47.699 ********* 2025-08-29 14:51:00.487979 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.487989 | orchestrator | 2025-08-29 14:51:00.487999 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-08-29 14:51:00.488008 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:01.280) 0:00:48.979 ********* 2025-08-29 14:51:00.488025 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.488034 | orchestrator | 2025-08-29 14:51:00.488044 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-08-29 14:51:00.488054 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.819) 0:00:49.798 ********* 2025-08-29 14:51:00.488063 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.488073 | orchestrator | 2025-08-29 14:51:00.488082 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-08-29 14:51:00.488092 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:03.273) 0:00:53.071 ********* 2025-08-29 14:51:00.488102 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.488111 | orchestrator | 2025-08-29 14:51:00.488121 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-08-29 14:51:00.488131 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:00.775) 0:00:53.847 ********* 2025-08-29 14:51:00.488140 | orchestrator | changed: 
[testbed-manager] 2025-08-29 14:51:00.488150 | orchestrator | 2025-08-29 14:51:00.488159 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-08-29 14:51:00.488169 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:01.083) 0:00:54.931 ********* 2025-08-29 14:51:00.488179 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.488188 | orchestrator | 2025-08-29 14:51:00.488198 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:51:00.488207 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:51:00.488217 | orchestrator | 2025-08-29 14:51:00.488227 | orchestrator | 2025-08-29 14:51:00.488236 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:51:00.488246 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.711) 0:00:55.642 ********* 2025-08-29 14:51:00.488255 | orchestrator | =============================================================================== 2025-08-29 14:51:00.488265 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 43.35s 2025-08-29 14:51:00.488274 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.27s 2025-08-29 14:51:00.488284 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.08s 2025-08-29 14:51:00.488311 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.34s 2025-08-29 14:51:00.488323 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.28s 2025-08-29 14:51:00.488333 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.08s 2025-08-29 14:51:00.488342 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.82s 
2025-08-29 14:51:00.488352 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2025-08-29 14:51:00.488361 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.71s 2025-08-29 14:51:00.488371 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s 2025-08-29 14:51:00.488380 | orchestrator | 2025-08-29 14:51:00.488389 | orchestrator | 2025-08-29 14:51:00.488399 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:51:00.488408 | orchestrator | 2025-08-29 14:51:00.488418 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:51:00.488428 | orchestrator | Friday 29 August 2025 14:49:57 +0000 (0:00:00.528) 0:00:00.528 ********* 2025-08-29 14:51:00.488437 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-08-29 14:51:00.488447 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-08-29 14:51:00.488456 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-08-29 14:51:00.488466 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-08-29 14:51:00.488475 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-08-29 14:51:00.488485 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-08-29 14:51:00.488500 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-08-29 14:51:00.488510 | orchestrator | 2025-08-29 14:51:00.488519 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-08-29 14:51:00.488529 | orchestrator | 2025-08-29 14:51:00.488539 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-08-29 14:51:00.488548 | orchestrator | Friday 29 August 2025 14:49:57 +0000 
(0:00:00.840) 0:00:01.368 ********* 2025-08-29 14:51:00.488569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:51:00.488585 | orchestrator | 2025-08-29 14:51:00.488595 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-08-29 14:51:00.488605 | orchestrator | Friday 29 August 2025 14:49:59 +0000 (0:00:01.634) 0:00:03.003 ********* 2025-08-29 14:51:00.488615 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:00.488624 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:00.488634 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:00.488643 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:00.488653 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:00.488669 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:00.488679 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.488689 | orchestrator | 2025-08-29 14:51:00.488698 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-08-29 14:51:00.488708 | orchestrator | Friday 29 August 2025 14:50:01 +0000 (0:00:01.691) 0:00:04.694 ********* 2025-08-29 14:51:00.488721 | orchestrator | ok: [testbed-manager] 2025-08-29 14:51:00.488731 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:51:00.488740 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:51:00.488750 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:51:00.488759 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:51:00.488769 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:51:00.488778 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:51:00.488788 | orchestrator | 2025-08-29 14:51:00.488797 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-08-29 14:51:00.488807 | 
orchestrator | Friday 29 August 2025 14:50:04 +0000 (0:00:02.959) 0:00:07.654 ********* 2025-08-29 14:51:00.488816 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.488826 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:00.488835 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:00.488845 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:00.488855 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:00.488864 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:00.488874 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:00.488883 | orchestrator | 2025-08-29 14:51:00.488893 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-08-29 14:51:00.488903 | orchestrator | Friday 29 August 2025 14:50:05 +0000 (0:00:01.659) 0:00:09.314 ********* 2025-08-29 14:51:00.488912 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:00.488922 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:00.488931 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:00.488941 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:00.488950 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:00.488960 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:51:00.488969 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.488979 | orchestrator | 2025-08-29 14:51:00.488989 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-08-29 14:51:00.488998 | orchestrator | Friday 29 August 2025 14:50:18 +0000 (0:00:12.437) 0:00:21.751 ********* 2025-08-29 14:51:00.489008 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:51:00.489017 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:51:00.489027 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:51:00.489037 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:51:00.489051 | orchestrator | changed: [testbed-node-5] 
2025-08-29 14:51:00.489061 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:51:00.489070 | orchestrator | changed: [testbed-manager] 2025-08-29 14:51:00.489081 | orchestrator | 2025-08-29 14:51:00.489090 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-08-29 14:51:00.489100 | orchestrator | Friday 29 August 2025 14:50:37 +0000 (0:00:19.345) 0:00:41.096 ********* 2025-08-29 14:51:00.489110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2025-08-29 14:51:00.489121 | orchestrator | 2025-08-29 14:51:00.489131 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-08-29 14:51:00.489141 | orchestrator | Friday 29 August 2025 14:50:38 +0000 (0:00:01.183) 0:00:42.279 ********* 2025-08-29 14:51:00.489150 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-08-29 14:51:00.489160 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-08-29 14:51:00.489170 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-08-29 14:51:00.489179 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-08-29 14:51:00.489189 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-08-29 14:51:00.489199 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-08-29 14:51:00.489208 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-08-29 14:51:00.489218 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-08-29 14:51:00.489227 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-08-29 14:51:00.489237 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-08-29 14:51:00.489246 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 
2025-08-29 14:51:00.489256 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-08-29 14:51:00.489266 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-08-29 14:51:00.489275 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-08-29 14:51:00.489285 | orchestrator |
2025-08-29 14:51:00.489312 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-08-29 14:51:00.489331 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:05.385) 0:00:47.665 *********
2025-08-29 14:51:00.489349 | orchestrator | ok: [testbed-manager]
2025-08-29 14:51:00.489367 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:00.489385 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:00.489403 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:00.489421 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:51:00.489438 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:51:00.489456 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:51:00.489472 | orchestrator |
2025-08-29 14:51:00.489488 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-08-29 14:51:00.489505 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:01.209) 0:00:48.874 *********
2025-08-29 14:51:00.489522 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:00.489538 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:00.489554 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:00.489570 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:00.489585 | orchestrator | changed: [testbed-manager]
2025-08-29 14:51:00.489601 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:00.489618 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:00.489635 | orchestrator |
2025-08-29 14:51:00.489652 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-08-29 14:51:00.489673 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:01.265) 0:00:50.140 *********
2025-08-29 14:51:00.489690 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:00.489705 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:00.489721 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:00.489737 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:51:00.489764 | orchestrator | ok: [testbed-manager]
2025-08-29 14:51:00.489781 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:51:00.489796 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:51:00.489812 | orchestrator |
2025-08-29 14:51:00.489836 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-08-29 14:51:00.489853 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:01.539) 0:00:51.679 *********
2025-08-29 14:51:00.489870 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:51:00.489880 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:51:00.489890 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:51:00.489899 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:51:00.489908 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:51:00.489918 | orchestrator | ok: [testbed-manager]
2025-08-29 14:51:00.489927 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:51:00.489937 | orchestrator |
2025-08-29 14:51:00.489949 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-08-29 14:51:00.489966 | orchestrator | Friday 29 August 2025 14:50:51 +0000 (0:00:03.457) 0:00:55.137 *********
2025-08-29 14:51:00.489983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-08-29 14:51:00.490001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:51:00.490075 | orchestrator |
2025-08-29 14:51:00.490089 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-08-29 14:51:00.490099 | orchestrator | Friday 29 August 2025 14:50:53 +0000 (0:00:01.564) 0:00:56.702 *********
2025-08-29 14:51:00.490109 | orchestrator | changed: [testbed-manager]
2025-08-29 14:51:00.490118 | orchestrator |
2025-08-29 14:51:00.490128 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-08-29 14:51:00.490138 | orchestrator | Friday 29 August 2025 14:50:55 +0000 (0:00:01.828) 0:00:58.531 *********
2025-08-29 14:51:00.490148 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:00.490157 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:00.490166 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:00.490176 | orchestrator | changed: [testbed-manager]
2025-08-29 14:51:00.490185 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:00.490195 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:00.490204 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:00.490214 | orchestrator |
2025-08-29 14:51:00.490223 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:51:00.490236 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490253 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490271 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490289 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490339 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490350 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490360 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:00.490369 | orchestrator |
2025-08-29 14:51:00.490379 | orchestrator |
2025-08-29 14:51:00.490389 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:51:00.490407 | orchestrator | Friday 29 August 2025 14:50:57 +0000 (0:00:02.750) 0:01:01.281 *********
2025-08-29 14:51:00.490417 | orchestrator | ===============================================================================
2025-08-29 14:51:00.490426 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.35s
2025-08-29 14:51:00.490436 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.44s
2025-08-29 14:51:00.490445 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.39s
2025-08-29 14:51:00.490455 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.46s
2025-08-29 14:51:00.490465 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.96s
2025-08-29 14:51:00.490474 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.75s
2025-08-29 14:51:00.490484 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.83s
2025-08-29 14:51:00.490493 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.69s
2025-08-29 14:51:00.490503 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.66s
2025-08-29 14:51:00.490512 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.63s
2025-08-29 14:51:00.490522 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.56s
2025-08-29 14:51:00.490541 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.54s
2025-08-29 14:51:00.490551 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.27s
2025-08-29 14:51:00.490560 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2025-08-29 14:51:00.490571 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.18s
2025-08-29 14:51:00.490580 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2025-08-29 14:51:00.490590 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task f93ffe89-4559-4aba-b041-075c12bc4331 is in state SUCCESS
2025-08-29 14:51:00.490601 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED
2025-08-29 14:51:00.490610 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:51:00.490620 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED
2025-08-29 14:51:00.490630 | orchestrator | 2025-08-29 14:51:00 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:51:00.490640 | orchestrator | 2025-08-29 14:51:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:03.535466 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED
2025-08-29 14:51:03.537555 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:51:03.537623 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state STARTED
2025-08-29 14:51:03.540018 | orchestrator | 2025-08-29 14:51:03 | INFO  | Task
5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:51:03.540097 | orchestrator | 2025-08-29 14:51:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:06.599276 | orchestrator | 2025-08-29 14:51:06 | INFO  | Task 68096644-88cc-4814-af3b-b5a061402ba5 is in state SUCCESS
[... identical status checks repeated roughly every 3 seconds from 14:51:06 to 14:52:38: tasks 9ca983bc-33d9-4a71-93fc-ae1a3791475a, 690c9b85-b454-45eb-bbe1-417344288ec7 and 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 remained in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-08-29 14:52:38.021204 | orchestrator |
2025-08-29 14:52:38 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED
2025-08-29 14:52:38.025566 | orchestrator | 2025-08-29 14:52:38 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:38.026327 | orchestrator | 2025-08-29 14:52:38 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:38.026348 | orchestrator | 2025-08-29 14:52:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:41.073386 | orchestrator | 2025-08-29 14:52:41 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state STARTED
2025-08-29 14:52:41.075985 | orchestrator | 2025-08-29 14:52:41 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:41.078798 | orchestrator | 2025-08-29 14:52:41 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:41.078836 | orchestrator | 2025-08-29 14:52:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:44.131705 | orchestrator | 2025-08-29 14:52:44 | INFO  | Task 9ca983bc-33d9-4a71-93fc-ae1a3791475a is in state SUCCESS
2025-08-29 14:52:44.132569 | orchestrator |
2025-08-29 14:52:44.132608 | orchestrator |
2025-08-29 14:52:44.132617 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 14:52:44.132648 | orchestrator |
2025-08-29 14:52:44.132653 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 14:52:44.132663 | orchestrator | Friday 29 August 2025 14:50:12 +0000 (0:00:00.238) 0:00:00.238 *********
2025-08-29 14:52:44.132667 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:44.132673 | orchestrator |
2025-08-29 14:52:44.132678 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 14:52:44.132684 | orchestrator | Friday 29 August 2025 14:50:13 +0000 (0:00:01.090) 0:00:01.329 *********
2025-08-29 14:52:44.132691 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 14:52:44.132697 | orchestrator |
2025-08-29 14:52:44.132704 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 14:52:44.132709 | orchestrator | Friday 29 August 2025 14:50:14 +0000 (0:00:00.781) 0:00:02.110 *********
2025-08-29 14:52:44.132716 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:44.132722 | orchestrator |
2025-08-29 14:52:44.132728 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 14:52:44.132734 | orchestrator | Friday 29 August 2025 14:50:15 +0000 (0:00:01.280) 0:00:03.390 *********
2025-08-29 14:52:44.132740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 14:52:44.132746 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:44.132752 | orchestrator |
2025-08-29 14:52:44.132757 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 14:52:44.132763 | orchestrator | Friday 29 August 2025 14:51:01 +0000 (0:00:45.462) 0:00:48.853 *********
2025-08-29 14:52:44.132769 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:44.132775 | orchestrator |
2025-08-29 14:52:44.132781 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:52:44.132897 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:52:44.132970 | orchestrator |
2025-08-29 14:52:44.132977 | orchestrator |
2025-08-29 14:52:44.132984 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:52:44.133012 | orchestrator | Friday 29 August 2025 14:51:05 +0000 (0:00:04.445) 0:00:53.299 *********
2025-08-29 14:52:44.133019 | orchestrator | ===============================================================================
2025-08-29 14:52:44.133026 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.46s
2025-08-29 14:52:44.133032 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.45s
2025-08-29 14:52:44.133038 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.28s
2025-08-29 14:52:44.133045 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.09s
2025-08-29 14:52:44.133051 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.78s
2025-08-29 14:52:44.133056 | orchestrator |
2025-08-29 14:52:44.133062 | orchestrator |
2025-08-29 14:52:44.133069 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 14:52:44.133075 | orchestrator |
2025-08-29 14:52:44.133080 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:52:44.133086 | orchestrator | Friday 29 August 2025 14:49:49 +0000 (0:00:00.267) 0:00:00.267 *********
2025-08-29 14:52:44.133093 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:52:44.133100 | orchestrator |
2025-08-29 14:52:44.133106 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 14:52:44.133112 | orchestrator | Friday 29 August 2025 14:49:50 +0000 (0:00:01.266) 0:00:01.533 *********
2025-08-29 14:52:44.133118 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133124 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133130 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133135 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133141 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133148 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133154 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133160 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133167 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133175 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133181 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133187 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133193 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133200 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133206 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:52:44.133213 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133232 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133239 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:52:44.133299 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133308 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133315 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:52:44.133333 | orchestrator |
2025-08-29 14:52:44.133340 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:52:44.133346 | orchestrator | Friday 29 August 2025 14:49:54 +0000 (0:00:03.967) 0:00:05.501 *********
2025-08-29 14:52:44.133353 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:52:44.133361 | orchestrator |
2025-08-29 14:52:44.133367 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-08-29 14:52:44.133374 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:01.170) 0:00:06.672 *********
2025-08-29 14:52:44.133386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:52:44.133397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:52:44.133405 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:52:44.133412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:52:44.133420 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.133462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.133477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.133501 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.133643 | orchestrator | 2025-08-29 14:52:44.133650 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 14:52:44.133661 | orchestrator | Friday 29 August 2025 14:50:01 +0000 (0:00:05.896) 0:00:12.568 ********* 2025-08-29 14:52:44.133672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133679 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133693 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:52:44.133700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133720 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:44.133726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133766 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:44.133772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133786 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133792 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:44.133800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133856 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:44.133862 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:44.133868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133894 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:44.133901 | orchestrator | 2025-08-29 14:52:44.133907 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 
14:52:44.133913 | orchestrator | Friday 29 August 2025 14:50:03 +0000 (0:00:01.567) 0:00:14.135 ********* 2025-08-29 14:52:44.133920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133942 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133948 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:52:44.133956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.133982 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:44.133988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.133994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.134130 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.134158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134172 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:44.134179 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:44.134186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.134216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134226 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:44.134232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134239 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:44.134310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:52:44.134323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.134342 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:44.134349 | orchestrator | 2025-08-29 14:52:44.134355 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-08-29 14:52:44.134362 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:03.151) 0:00:17.287 ********* 2025-08-29 14:52:44.134387 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:52:44.134393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:44.134397 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:44.134401 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:44.134406 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:44.134412 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:44.134418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:44.134425 | orchestrator | 2025-08-29 14:52:44.134431 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-08-29 14:52:44.134447 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:00.717) 0:00:18.005 ********* 2025-08-29 14:52:44.134454 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:52:44.134461 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:52:44.134467 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:52:44.134474 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:52:44.134480 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 14:52:44.134486 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:52:44.134493 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:52:44.134499 | orchestrator | 2025-08-29 14:52:44.134506 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-08-29 14:52:44.134512 | orchestrator | Friday 29 August 2025 14:50:08 +0000 (0:00:01.125) 0:00:19.130 ********* 2025-08-29 14:52:44.134528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-08-29 14:52:44.134590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134622 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.134634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134695 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134722 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.134729 | orchestrator | 2025-08-29 14:52:44.134734 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-08-29 14:52:44.134741 | orchestrator | Friday 29 August 2025 14:50:14 +0000 (0:00:06.509) 0:00:25.640 ********* 2025-08-29 14:52:44.134747 | orchestrator | [WARNING]: Skipped 2025-08-29 14:52:44.134756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-08-29 14:52:44.134762 | orchestrator | to this access issue: 2025-08-29 14:52:44.134770 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-08-29 14:52:44.134776 | orchestrator | directory 2025-08-29 14:52:44.134783 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:52:44.134791 | orchestrator | 2025-08-29 14:52:44.134797 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-08-29 14:52:44.134804 | orchestrator | Friday 29 August 2025 14:50:16 +0000 (0:00:01.563) 0:00:27.203 ********* 2025-08-29 14:52:44.134810 | orchestrator | [WARNING]: Skipped 2025-08-29 14:52:44.134817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-08-29 14:52:44.134824 | orchestrator | to this access issue: 2025-08-29 14:52:44.134830 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-08-29 14:52:44.134837 | orchestrator | directory 2025-08-29 14:52:44.134843 | orchestrator | ok: [testbed-manager -> 
localhost] 2025-08-29 14:52:44.134850 | orchestrator | 2025-08-29 14:52:44.134855 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-08-29 14:52:44.134861 | orchestrator | Friday 29 August 2025 14:50:17 +0000 (0:00:01.219) 0:00:28.422 ********* 2025-08-29 14:52:44.134868 | orchestrator | [WARNING]: Skipped 2025-08-29 14:52:44.134874 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-08-29 14:52:44.134913 | orchestrator | to this access issue: 2025-08-29 14:52:44.134923 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-08-29 14:52:44.134938 | orchestrator | directory 2025-08-29 14:52:44.134945 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:52:44.134951 | orchestrator | 2025-08-29 14:52:44.134963 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-08-29 14:52:44.134970 | orchestrator | Friday 29 August 2025 14:50:18 +0000 (0:00:01.433) 0:00:29.855 ********* 2025-08-29 14:52:44.134977 | orchestrator | [WARNING]: Skipped 2025-08-29 14:52:44.134988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-08-29 14:52:44.134996 | orchestrator | to this access issue: 2025-08-29 14:52:44.135003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-08-29 14:52:44.135009 | orchestrator | directory 2025-08-29 14:52:44.135016 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 14:52:44.135023 | orchestrator | 2025-08-29 14:52:44.135031 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-08-29 14:52:44.135038 | orchestrator | Friday 29 August 2025 14:50:19 +0000 (0:00:01.136) 0:00:30.992 ********* 2025-08-29 14:52:44.135045 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.135052 | orchestrator | 
changed: [testbed-manager] 2025-08-29 14:52:44.135058 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.135064 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:44.135071 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.135078 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.135085 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.135091 | orchestrator | 2025-08-29 14:52:44.135098 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-08-29 14:52:44.135105 | orchestrator | Friday 29 August 2025 14:50:24 +0000 (0:00:04.610) 0:00:35.602 ********* 2025-08-29 14:52:44.135111 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135119 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135125 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135138 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135144 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135151 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 14:52:44.135157 | orchestrator | 2025-08-29 14:52:44.135163 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-08-29 14:52:44.135169 | orchestrator | Friday 29 August 2025 14:50:29 +0000 (0:00:04.683) 0:00:40.286 ********* 2025-08-29 14:52:44.135176 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 14:52:44.135182 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.135188 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.135195 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.135201 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.135207 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.135213 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.135220 | orchestrator | 2025-08-29 14:52:44.135227 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 14:52:44.135233 | orchestrator | Friday 29 August 2025 14:50:32 +0000 (0:00:03.579) 0:00:43.865 ********* 2025-08-29 14:52:44.135240 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135275 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135284 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135331 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135351 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135369 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135389 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135396 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135418 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
14:52:44.135426 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135433 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135445 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:52:44.135461 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135468 | orchestrator | 2025-08-29 14:52:44.135475 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-08-29 14:52:44.135481 | orchestrator | Friday 29 August 2025 14:50:36 +0000 (0:00:03.262) 0:00:47.128 ********* 2025-08-29 14:52:44.135487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135500 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135506 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135513 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135523 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135530 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 14:52:44.135536 | orchestrator | 2025-08-29 14:52:44.135541 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] 
********************** 2025-08-29 14:52:44.135548 | orchestrator | Friday 29 August 2025 14:50:39 +0000 (0:00:03.349) 0:00:50.478 ********* 2025-08-29 14:52:44.135554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135561 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135567 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135573 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135579 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135585 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135591 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 14:52:44.135597 | orchestrator | 2025-08-29 14:52:44.135603 | orchestrator | TASK [common : Check common containers] **************************************** 2025-08-29 14:52:44.135609 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:02.825) 0:00:53.303 ********* 2025-08-29 14:52:44.135616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135644 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135686 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135726 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:52:44.135744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135751 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:52:44.135811 | orchestrator | 2025-08-29 14:52:44.135817 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-08-29 14:52:44.135823 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:03.471) 0:00:56.775 ********* 2025-08-29 14:52:44.135830 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.135836 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:44.135842 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.135847 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.135854 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.135860 | 
orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.135866 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.135871 | orchestrator | 2025-08-29 14:52:44.135877 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-08-29 14:52:44.135883 | orchestrator | Friday 29 August 2025 14:50:47 +0000 (0:00:01.649) 0:00:58.424 ********* 2025-08-29 14:52:44.135915 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:44.135924 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.135930 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.135937 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.135942 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.135949 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.135955 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.135962 | orchestrator | 2025-08-29 14:52:44.135968 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.135975 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:01.817) 0:01:00.242 ********* 2025-08-29 14:52:44.135982 | orchestrator | 2025-08-29 14:52:44.135988 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.135994 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.132) 0:01:00.375 ********* 2025-08-29 14:52:44.136000 | orchestrator | 2025-08-29 14:52:44.136006 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.136013 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.102) 0:01:00.477 ********* 2025-08-29 14:52:44.136019 | orchestrator | 2025-08-29 14:52:44.136025 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.136031 | orchestrator | Friday 29 August 2025 14:50:49 +0000 
(0:00:00.281) 0:01:00.759 ********* 2025-08-29 14:52:44.136038 | orchestrator | 2025-08-29 14:52:44.136045 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.136050 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.095) 0:01:00.854 ********* 2025-08-29 14:52:44.136059 | orchestrator | 2025-08-29 14:52:44.136066 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.136091 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.084) 0:01:00.938 ********* 2025-08-29 14:52:44.136098 | orchestrator | 2025-08-29 14:52:44.136113 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 14:52:44.136120 | orchestrator | Friday 29 August 2025 14:50:49 +0000 (0:00:00.083) 0:01:01.022 ********* 2025-08-29 14:52:44.136126 | orchestrator | 2025-08-29 14:52:44.136132 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-08-29 14:52:44.136139 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:00.130) 0:01:01.152 ********* 2025-08-29 14:52:44.136152 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:44.136164 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.136169 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.136175 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.136184 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.136195 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.136202 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.136208 | orchestrator | 2025-08-29 14:52:44.136215 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-08-29 14:52:44.136222 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:47.906) 0:01:49.058 ********* 2025-08-29 14:52:44.136228 | orchestrator 
| changed: [testbed-node-0] 2025-08-29 14:52:44.136235 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.136241 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.136271 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.136279 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.136285 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.136291 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.136297 | orchestrator | 2025-08-29 14:52:44.136304 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-08-29 14:52:44.136310 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:52.845) 0:02:41.903 ********* 2025-08-29 14:52:44.136316 | orchestrator | ok: [testbed-manager] 2025-08-29 14:52:44.136324 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:52:44.136330 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:52:44.136336 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:52:44.136343 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:52:44.136349 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:52:44.136356 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:52:44.136362 | orchestrator | 2025-08-29 14:52:44.136369 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-08-29 14:52:44.136376 | orchestrator | Friday 29 August 2025 14:52:33 +0000 (0:00:02.286) 0:02:44.189 ********* 2025-08-29 14:52:44.136382 | orchestrator | changed: [testbed-manager] 2025-08-29 14:52:44.136389 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:52:44.136395 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:52:44.136402 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:52:44.136408 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:52:44.136414 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:52:44.136420 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:52:44.136427 | 
orchestrator | 2025-08-29 14:52:44.136433 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:52:44.136440 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136448 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136455 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136462 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136468 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136481 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136487 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 14:52:44.136494 | orchestrator | 2025-08-29 14:52:44.136500 | orchestrator | 2025-08-29 14:52:44.136507 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:52:44.136513 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:10.365) 0:02:54.555 ********* 2025-08-29 14:52:44.136520 | orchestrator | =============================================================================== 2025-08-29 14:52:44.136527 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 52.85s 2025-08-29 14:52:44.136534 | orchestrator | common : Restart fluentd container ------------------------------------- 47.91s 2025-08-29 14:52:44.136539 | orchestrator | common : Restart cron container ---------------------------------------- 10.37s 2025-08-29 14:52:44.136545 | orchestrator | common : Copying over config.json files for 
services -------------------- 6.51s 2025-08-29 14:52:44.136550 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.90s 2025-08-29 14:52:44.136556 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.68s 2025-08-29 14:52:44.136562 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.61s 2025-08-29 14:52:44.136568 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.97s 2025-08-29 14:52:44.136574 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.58s 2025-08-29 14:52:44.136580 | orchestrator | common : Check common containers ---------------------------------------- 3.47s 2025-08-29 14:52:44.136586 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.35s 2025-08-29 14:52:44.136592 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.26s 2025-08-29 14:52:44.136599 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.15s 2025-08-29 14:52:44.136605 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.83s 2025-08-29 14:52:44.136617 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.29s 2025-08-29 14:52:44.136625 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.82s 2025-08-29 14:52:44.136638 | orchestrator | common : Creating log volume -------------------------------------------- 1.65s 2025-08-29 14:52:44.136645 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.57s 2025-08-29 14:52:44.136652 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.56s 2025-08-29 14:52:44.136658 | orchestrator | common : Find custom fluentd format config files 
------------------------ 1.43s 2025-08-29 14:52:44.136664 | orchestrator | 2025-08-29 14:52:44 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:52:44.136671 | orchestrator | 2025-08-29 14:52:44 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:52:44.136677 | orchestrator | 2025-08-29 14:52:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:47.175622 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED 2025-08-29 14:52:47.198287 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED 2025-08-29 14:52:47.198359 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:52:47.198365 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:52:47.198370 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:52:47.198392 | orchestrator | 2025-08-29 14:52:47 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:52:47.198398 | orchestrator | 2025-08-29 14:52:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:50.215063 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED 2025-08-29 14:52:50.215162 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED 2025-08-29 14:52:50.215386 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:52:50.216753 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:52:50.217108 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task 
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:50.217661 | orchestrator | 2025-08-29 14:52:50 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:50.217745 | orchestrator | 2025-08-29 14:52:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:53.265125 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:52:53.265341 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED
2025-08-29 14:52:53.265847 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:52:53.266453 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:52:53.267128 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:53.267815 | orchestrator | 2025-08-29 14:52:53 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:53.267833 | orchestrator | 2025-08-29 14:52:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:56.313722 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:52:56.317381 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED
2025-08-29 14:52:56.318373 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:52:56.318557 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:52:56.320158 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:56.321548 | orchestrator | 2025-08-29 14:52:56 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:56.321612 | orchestrator | 2025-08-29 14:52:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:59.379527 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:52:59.381016 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED
2025-08-29 14:52:59.381831 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:52:59.382776 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:52:59.386421 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:52:59.388638 | orchestrator | 2025-08-29 14:52:59 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:52:59.388845 | orchestrator | 2025-08-29 14:52:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:02.472033 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:02.472540 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state STARTED
2025-08-29 14:53:02.473210 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:02.473863 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:02.474705 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:02.475364 | orchestrator | 2025-08-29 14:53:02 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:02.475420 | orchestrator | 2025-08-29 14:53:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:05.509888 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:05.510137 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task eb83e881-5f44-425c-b870-2fcb308dbbd3 is in state SUCCESS
2025-08-29 14:53:05.510977 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:05.511890 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:05.514102 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:05.514638 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:05.515704 | orchestrator | 2025-08-29 14:53:05 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:05.515743 | orchestrator | 2025-08-29 14:53:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:08.543607 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:08.545473 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:08.547326 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:08.549416 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:08.550660 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:08.550700 | orchestrator | 2025-08-29 14:53:08 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:08.550711 | orchestrator | 2025-08-29 14:53:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:11.592024 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:11.592559 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:11.593306 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:11.594133 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:11.594941 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:11.595502 | orchestrator | 2025-08-29 14:53:11 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:11.595741 | orchestrator | 2025-08-29 14:53:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:14.633569 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:14.633685 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:14.634201 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:14.635057 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:14.635767 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:14.636659 | orchestrator | 2025-08-29 14:53:14 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:14.636673 | orchestrator | 2025-08-29 14:53:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:17.695775 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:17.701377 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:17.703193 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:17.705571 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:17.713162 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:17.715687 | orchestrator | 2025-08-29 14:53:17 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:17.715752 | orchestrator | 2025-08-29 14:53:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:20.740145 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state STARTED
2025-08-29 14:53:20.740314 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED
2025-08-29 14:53:20.740491 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:53:20.741246 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:53:20.741874 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED
2025-08-29 14:53:20.742468 | orchestrator | 2025-08-29 14:53:20 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED
2025-08-29 14:53:20.743022 | orchestrator | 2025-08-29 14:53:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:53:23.789497 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task ff61df73-4bd2-4aa2-a214-cb1f4caa251c is in state SUCCESS
2025-08-29 14:53:23.790207 | orchestrator |
2025-08-29 14:53:23.790296 | orchestrator |
2025-08-29 14:53:23.790317 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:53:23.790336 | orchestrator |
2025-08-29 14:53:23.790357 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:53:23.790376 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:00.424) 0:00:00.424 *********
2025-08-29 14:53:23.790397 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:23.790452 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:23.790462 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:23.790472 | orchestrator |
2025-08-29 14:53:23.790482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:53:23.790492 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.462) 0:00:00.887 *********
2025-08-29 14:53:23.790502 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-08-29 14:53:23.790513 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-08-29 14:53:23.790522 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-08-29 14:53:23.790532 | orchestrator |
2025-08-29 14:53:23.790541 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-08-29 14:53:23.790551 | orchestrator |
2025-08-29 14:53:23.790560 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-08-29 14:53:23.790570 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.878) 0:00:01.765 *********
2025-08-29 14:53:23.790579 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:53:23.790590 | orchestrator |
2025-08-29 14:53:23.790600 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-08-29 14:53:23.790609 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:00.962) 0:00:02.728 *********
2025-08-29 14:53:23.790619 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 14:53:23.790629 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 14:53:23.790639 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 14:53:23.790648 | orchestrator |
2025-08-29 14:53:23.790658 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-08-29 14:53:23.790667 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:00.995) 0:00:03.723 *********
2025-08-29 14:53:23.790677 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-08-29 14:53:23.790704 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-08-29 14:53:23.790714 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-08-29 14:53:23.790724 | orchestrator |
2025-08-29 14:53:23.790734 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-08-29 14:53:23.790743 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:02.114) 0:00:05.838 *********
2025-08-29 14:53:23.790753 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:23.790763 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:23.790774 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:23.790783 | orchestrator |
2025-08-29 14:53:23.790793 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-08-29 14:53:23.790802 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:02.076) 0:00:07.915 *********
2025-08-29 14:53:23.790812 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:23.790822 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:23.790831 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:23.790841 | orchestrator |
2025-08-29 14:53:23.790850 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:53:23.790860 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.790871 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.790881 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.790890 | orchestrator |
2025-08-29 14:53:23.790900 | orchestrator |
2025-08-29 14:53:23.790910 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:53:23.790919 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:04.751) 0:00:12.666 *********
2025-08-29 14:53:23.790936 | orchestrator | ===============================================================================
2025-08-29 14:53:23.790946 | orchestrator | memcached : Restart memcached container --------------------------------- 4.75s
2025-08-29 14:53:23.790955 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.11s
2025-08-29 14:53:23.790965 | orchestrator | memcached : Check memcached container ----------------------------------- 2.08s
2025-08-29 14:53:23.790974 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.00s
2025-08-29 14:53:23.790984 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.96s
2025-08-29 14:53:23.790993 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2025-08-29 14:53:23.791003 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2025-08-29 14:53:23.791013 | orchestrator |
2025-08-29 14:53:23.791022 | orchestrator |
2025-08-29 14:53:23.791032 | orchestrator | PLAY [Group hosts based on configuration] **************************************
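The "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries that recur throughout this log come from a simple state-polling loop: the driver checks each pending task, reports any that are still running, and sleeps one second between rounds until everything reaches a terminal state. A minimal sketch of that pattern; `fetch_state` and `wait_for_tasks` are illustrative stand-ins, not the actual OSISM client API:

```python
import time

# Hypothetical terminal states, mirroring the STARTED/SUCCESS values in the log.
TERMINAL = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, fetch_state, interval=1):
    """Poll until every task reaches a terminal state, logging like the output above."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = fetch_state(task_id)  # stand-in for the real task-state lookup
            print(f"Task {task_id} is in state {state}")
            if state not in TERMINAL:
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

Tasks that finish simply drop out of the next polling round, which is why the list of reported task IDs shrinks over time in the log above.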
2025-08-29 14:53:23.791042 | orchestrator |
2025-08-29 14:53:23.791051 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:53:23.791061 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.487) 0:00:00.487 *********
2025-08-29 14:53:23.791071 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:23.791080 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:23.791090 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:23.791100 | orchestrator |
2025-08-29 14:53:23.791109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:53:23.791136 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.423) 0:00:00.911 *********
2025-08-29 14:53:23.791146 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-08-29 14:53:23.791156 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-08-29 14:53:23.791165 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-08-29 14:53:23.791175 | orchestrator |
2025-08-29 14:53:23.791184 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-08-29 14:53:23.791194 | orchestrator |
2025-08-29 14:53:23.791203 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-08-29 14:53:23.791213 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:00.834) 0:00:01.745 *********
2025-08-29 14:53:23.791244 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:53:23.791254 | orchestrator |
2025-08-29 14:53:23.791264 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-08-29 14:53:23.791273 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:00.634) 0:00:02.379 *********
2025-08-29 14:53:23.791288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791383 | orchestrator |
2025-08-29 14:53:23.791393 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-08-29 14:53:23.791403 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:01.553) 0:00:03.933 *********
2025-08-29 14:53:23.791413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791496 | orchestrator |
2025-08-29 14:53:23.791506 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-08-29 14:53:23.791516 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:03.167) 0:00:07.100 *********
2025-08-29 14:53:23.791526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791608 | orchestrator |
2025-08-29 14:53:23.791617 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-08-29 14:53:23.791627 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:03.208) 0:00:10.309 *********
2025-08-29 14:53:23.791637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:53:23.791709 | orchestrator |
2025-08-29 14:53:23.791719 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:53:23.791729 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:02.474) 0:00:12.783 *********
2025-08-29 14:53:23.791738 | orchestrator |
2025-08-29 14:53:23.791748 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:53:23.791765 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:00.616) 0:00:13.399 *********
2025-08-29 14:53:23.791776 | orchestrator |
2025-08-29 14:53:23.791786 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:53:23.791795 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:00.290) 0:00:13.689 *********
2025-08-29 14:53:23.791805 | orchestrator |
2025-08-29 14:53:23.791814 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-08-29 14:53:23.791824 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:00.276) 0:00:13.966 *********
2025-08-29 14:53:23.791840 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:23.791850 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:23.791859 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:23.791869 | orchestrator |
2025-08-29 14:53:23.791878 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-08-29 14:53:23.791888 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:13.139) 0:00:27.106 *********
2025-08-29 14:53:23.791898 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:23.791907 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:23.791917 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:23.791926 | orchestrator |
2025-08-29 14:53:23.791936 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:53:23.791946 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.791955 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.791970 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:53:23.791980 | orchestrator |
2025-08-29 14:53:23.791990 | orchestrator |
2025-08-29 14:53:23.792000 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:53:23.792009 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:05.053) 0:00:32.160 *********
2025-08-29 14:53:23.792019 | orchestrator | ===============================================================================
2025-08-29 14:53:23.792035 | orchestrator | redis : Restart redis container ---------------------------------------- 13.14s
2025-08-29 14:53:23.792051 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.05s 2025-08-29 14:53:23.792066 | orchestrator | redis : Copying over redis config files --------------------------------- 3.21s 2025-08-29 14:53:23.792082 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s 2025-08-29 14:53:23.792097 | orchestrator | redis : Check redis containers ------------------------------------------ 2.47s 2025-08-29 14:53:23.792113 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.55s 2025-08-29 14:53:23.792128 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.18s 2025-08-29 14:53:23.792144 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2025-08-29 14:53:23.792160 | orchestrator | redis : include_tasks --------------------------------------------------- 0.63s 2025-08-29 14:53:23.792178 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-08-29 14:53:23.792195 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:23.793421 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:23.793905 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:23.794579 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:23.795271 | orchestrator | 2025-08-29 14:53:23 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:23.795297 | orchestrator | 2025-08-29 14:53:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:26.852116 | orchestrator | 2025-08-29 14:53:26 | INFO  | Task 
9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:26.852414 | orchestrator | 2025-08-29 14:53:26 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:26.852872 | orchestrator | 2025-08-29 14:53:26 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:26.853496 | orchestrator | 2025-08-29 14:53:26 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:26.860442 | orchestrator | 2025-08-29 14:53:26 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:26.860597 | orchestrator | 2025-08-29 14:53:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:29.964178 | orchestrator | 2025-08-29 14:53:29 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:29.965689 | orchestrator | 2025-08-29 14:53:29 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:29.969306 | orchestrator | 2025-08-29 14:53:29 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:29.975735 | orchestrator | 2025-08-29 14:53:29 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:29.977345 | orchestrator | 2025-08-29 14:53:29 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:29.977443 | orchestrator | 2025-08-29 14:53:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:33.081056 | orchestrator | 2025-08-29 14:53:33 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:33.081125 | orchestrator | 2025-08-29 14:53:33 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:33.081131 | orchestrator | 2025-08-29 14:53:33 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:33.081136 | orchestrator | 2025-08-29 14:53:33 | INFO  | Task 
5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:33.081140 | orchestrator | 2025-08-29 14:53:33 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:33.081145 | orchestrator | 2025-08-29 14:53:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:36.104393 | orchestrator | 2025-08-29 14:53:36 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:36.106597 | orchestrator | 2025-08-29 14:53:36 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:36.107017 | orchestrator | 2025-08-29 14:53:36 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:36.107703 | orchestrator | 2025-08-29 14:53:36 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:36.111561 | orchestrator | 2025-08-29 14:53:36 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:36.111629 | orchestrator | 2025-08-29 14:53:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:39.140815 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:39.141343 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:39.142143 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:39.142896 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state STARTED 2025-08-29 14:53:39.143818 | orchestrator | 2025-08-29 14:53:39 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:39.143851 | orchestrator | 2025-08-29 14:53:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:42.170420 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 
9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:42.170504 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:42.171300 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:42.173870 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 5be29207-1409-485f-9e0a-e99689a2b0b9 is in state STARTED 2025-08-29 14:53:42.174736 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 5543d4e5-01fc-4c34-8879-ed7bc59e15b4 is in state SUCCESS 2025-08-29 14:53:42.177276 | orchestrator | 2025-08-29 14:53:42.177334 | orchestrator | 2025-08-29 14:53:42.177346 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-08-29 14:53:42.177358 | orchestrator | 2025-08-29 14:53:42.177369 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-08-29 14:53:42.177383 | orchestrator | Friday 29 August 2025 14:49:49 +0000 (0:00:00.194) 0:00:00.194 ********* 2025-08-29 14:53:42.177394 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:53:42.177406 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:53:42.177416 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:53:42.177427 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.177437 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.177447 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.177456 | orchestrator | 2025-08-29 14:53:42.177465 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-08-29 14:53:42.177475 | orchestrator | Friday 29 August 2025 14:49:50 +0000 (0:00:00.729) 0:00:00.924 ********* 2025-08-29 14:53:42.177486 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.177497 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.177509 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 14:53:42.177520 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.177529 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.177539 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.177549 | orchestrator | 2025-08-29 14:53:42.177559 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 14:53:42.177570 | orchestrator | Friday 29 August 2025 14:49:50 +0000 (0:00:00.638) 0:00:01.562 ********* 2025-08-29 14:53:42.177581 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.177591 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.177602 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.177609 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.177615 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.177622 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.177628 | orchestrator | 2025-08-29 14:53:42.177634 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 14:53:42.177644 | orchestrator | Friday 29 August 2025 14:49:51 +0000 (0:00:00.690) 0:00:02.253 ********* 2025-08-29 14:53:42.177653 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:53:42.177664 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:53:42.177674 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:53:42.177684 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.177694 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.177703 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.177712 | orchestrator | 2025-08-29 14:53:42.177722 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 14:53:42.177732 | orchestrator | Friday 29 August 2025 14:49:53 +0000 (0:00:02.125) 0:00:04.378 ********* 2025-08-29 14:53:42.177742 | orchestrator | changed: [testbed-node-3] 2025-08-29 
14:53:42.177753 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:53:42.177763 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:53:42.177775 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.177786 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.177797 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.177806 | orchestrator | 2025-08-29 14:53:42.177818 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 14:53:42.177855 | orchestrator | Friday 29 August 2025 14:49:54 +0000 (0:00:01.118) 0:00:05.496 ********* 2025-08-29 14:53:42.177867 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:53:42.177878 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:53:42.177888 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:53:42.177898 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.177918 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.177974 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.177983 | orchestrator | 2025-08-29 14:53:42.177990 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 14:53:42.177997 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:00.807) 0:00:06.304 ********* 2025-08-29 14:53:42.178004 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178090 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178101 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178108 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178115 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178122 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.178129 | orchestrator | 2025-08-29 14:53:42.178136 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 14:53:42.178143 | orchestrator | Friday 29 August 2025 
14:49:56 +0000 (0:00:00.576) 0:00:06.880 ********* 2025-08-29 14:53:42.178150 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178157 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178188 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178195 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178201 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178225 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.178231 | orchestrator | 2025-08-29 14:53:42.178237 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 14:53:42.178244 | orchestrator | Friday 29 August 2025 14:49:57 +0000 (0:00:01.203) 0:00:08.084 ********* 2025-08-29 14:53:42.178250 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178257 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178263 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178269 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178275 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178281 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178288 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178294 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178300 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178306 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178327 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178334 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178340 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178352 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178359 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:53:42.178365 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:53:42.178371 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.178377 | orchestrator | 2025-08-29 14:53:42.178383 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 14:53:42.178390 | orchestrator | Friday 29 August 2025 14:49:58 +0000 (0:00:00.799) 0:00:08.883 ********* 2025-08-29 14:53:42.178405 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178412 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178424 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178430 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178436 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.178442 | orchestrator | 2025-08-29 14:53:42.178449 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 14:53:42.178457 | orchestrator | Friday 29 August 2025 14:49:59 +0000 (0:00:01.535) 0:00:10.418 ********* 2025-08-29 14:53:42.178463 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:53:42.178470 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:53:42.178476 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:53:42.178482 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.178488 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.178494 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.178500 | orchestrator | 2025-08-29 14:53:42.178506 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 14:53:42.178513 | orchestrator | Friday 29 August 2025 14:50:01 +0000 (0:00:01.251) 0:00:11.669 ********* 2025-08-29 14:53:42.178519 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:53:42.178525 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.178531 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:53:42.178537 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.178543 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:53:42.178549 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.178555 | orchestrator | 2025-08-29 14:53:42.178561 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 14:53:42.178568 | orchestrator | Friday 29 August 2025 14:50:07 +0000 (0:00:06.138) 0:00:17.808 ********* 2025-08-29 14:53:42.178574 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178580 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178586 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178592 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178598 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178604 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.178610 | orchestrator | 2025-08-29 14:53:42.178616 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 14:53:42.178622 | orchestrator | Friday 29 August 2025 14:50:08 +0000 (0:00:00.938) 0:00:18.746 ********* 2025-08-29 14:53:42.178628 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.178635 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.178641 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.178647 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 14:53:42.178658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.178664 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.178670 | orchestrator | 2025-08-29 14:53:42.178677 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 14:53:42.178684 | orchestrator | Friday 29 August 2025 14:50:10 +0000 (0:00:02.217) 0:00:20.964 ********* 2025-08-29 14:53:42.178691 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:53:42.178697 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:53:42.178703 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:53:42.178709 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.178715 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.178721 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.178727 | orchestrator | 2025-08-29 14:53:42.178733 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 14:53:42.178740 | orchestrator | Friday 29 August 2025 14:50:11 +0000 (0:00:01.523) 0:00:22.487 ********* 2025-08-29 14:53:42.178746 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 14:53:42.178753 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 14:53:42.178763 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 14:53:42.178770 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 14:53:42.178776 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 14:53:42.178782 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 14:53:42.178788 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 14:53:42.178794 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 14:53:42.178800 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 
14:53:42.178806 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 14:53:42.178812 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 14:53:42.178818 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 14:53:42.178824 | orchestrator | 2025-08-29 14:53:42.178831 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 14:53:42.178837 | orchestrator | Friday 29 August 2025 14:50:13 +0000 (0:00:02.105) 0:00:24.593 ********* 2025-08-29 14:53:42.178843 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:53:42.178849 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:53:42.178855 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:53:42.178861 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.178867 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.178874 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.178880 | orchestrator | 2025-08-29 14:53:42.178891 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 14:53:42.178897 | orchestrator | 2025-08-29 14:53:42.178903 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 14:53:42.178910 | orchestrator | Friday 29 August 2025 14:50:15 +0000 (0:00:01.807) 0:00:26.401 ********* 2025-08-29 14:53:42.178916 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.178922 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.178928 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.178934 | orchestrator | 2025-08-29 14:53:42.178945 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 14:53:42.178955 | orchestrator | Friday 29 August 2025 14:50:17 +0000 (0:00:01.935) 0:00:28.337 ********* 2025-08-29 14:53:42.178965 | orchestrator | ok: [testbed-node-0] 
2025-08-29 14:53:42.178975 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.178985 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.178993 | orchestrator | 2025-08-29 14:53:42.179003 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 14:53:42.179013 | orchestrator | Friday 29 August 2025 14:50:19 +0000 (0:00:01.990) 0:00:30.327 ********* 2025-08-29 14:53:42.179023 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.179033 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.179043 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.179053 | orchestrator | 2025-08-29 14:53:42.179064 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 14:53:42.179074 | orchestrator | Friday 29 August 2025 14:50:20 +0000 (0:00:01.087) 0:00:31.414 ********* 2025-08-29 14:53:42.179087 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.179097 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.179107 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.179116 | orchestrator | 2025-08-29 14:53:42.179126 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 14:53:42.179135 | orchestrator | Friday 29 August 2025 14:50:21 +0000 (0:00:01.111) 0:00:32.525 ********* 2025-08-29 14:53:42.179145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.179155 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179166 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.179177 | orchestrator | 2025-08-29 14:53:42.179187 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 14:53:42.179197 | orchestrator | Friday 29 August 2025 14:50:22 +0000 (0:00:00.385) 0:00:32.911 ********* 2025-08-29 14:53:42.179266 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.179277 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 14:53:42.179287 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.179297 | orchestrator | 2025-08-29 14:53:42.179307 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 14:53:42.179316 | orchestrator | Friday 29 August 2025 14:50:23 +0000 (0:00:00.871) 0:00:33.782 ********* 2025-08-29 14:53:42.179326 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.179336 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.179345 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.179355 | orchestrator | 2025-08-29 14:53:42.179364 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 14:53:42.179374 | orchestrator | Friday 29 August 2025 14:50:24 +0000 (0:00:01.556) 0:00:35.338 ********* 2025-08-29 14:53:42.179383 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:53:42.179392 | orchestrator | 2025-08-29 14:53:42.179402 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 14:53:42.179419 | orchestrator | Friday 29 August 2025 14:50:25 +0000 (0:00:00.810) 0:00:36.149 ********* 2025-08-29 14:53:42.179431 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.179442 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.179453 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.179464 | orchestrator | 2025-08-29 14:53:42.179475 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 14:53:42.179485 | orchestrator | Friday 29 August 2025 14:50:28 +0000 (0:00:02.762) 0:00:38.911 ********* 2025-08-29 14:53:42.179495 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179505 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.179515 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:53:42.179524 | orchestrator | 2025-08-29 14:53:42.179534 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 14:53:42.179544 | orchestrator | Friday 29 August 2025 14:50:29 +0000 (0:00:01.364) 0:00:40.276 ********* 2025-08-29 14:53:42.179555 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179566 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.179576 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.179586 | orchestrator | 2025-08-29 14:53:42.179596 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 14:53:42.179606 | orchestrator | Friday 29 August 2025 14:50:30 +0000 (0:00:01.247) 0:00:41.523 ********* 2025-08-29 14:53:42.179616 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179626 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.179636 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.179646 | orchestrator | 2025-08-29 14:53:42.179656 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 14:53:42.179666 | orchestrator | Friday 29 August 2025 14:50:32 +0000 (0:00:01.859) 0:00:43.383 ********* 2025-08-29 14:53:42.179676 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.179686 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179696 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.179706 | orchestrator | 2025-08-29 14:53:42.179716 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-08-29 14:53:42.179726 | orchestrator | Friday 29 August 2025 14:50:33 +0000 (0:00:00.700) 0:00:44.083 ********* 2025-08-29 14:53:42.179736 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.179746 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.179756 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:53:42.179766 | orchestrator | 2025-08-29 14:53:42.179777 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 14:53:42.179787 | orchestrator | Friday 29 August 2025 14:50:34 +0000 (0:00:01.255) 0:00:45.338 ********* 2025-08-29 14:53:42.179797 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:53:42.179807 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:53:42.179826 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:53:42.179836 | orchestrator | 2025-08-29 14:53:42.179859 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 14:53:42.179870 | orchestrator | Friday 29 August 2025 14:50:36 +0000 (0:00:01.830) 0:00:47.169 ********* 2025-08-29 14:53:42.179882 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:53:42.179899 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:53:42.179912 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:53:42.179922 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:53:42.179932 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:53:42.179942 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-08-29 14:53:42.179952 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:53:42.179962 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:53:42.179974 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:53:42.179984 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:53:42.179994 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:53:42.180004 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:53:42.180015 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-08-29 14:53:42.180026 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-08-29 14:53:42.180036 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-08-29 14:53:42.180052 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.180063 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.180073 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.180083 | orchestrator |
2025-08-29 14:53:42.180094 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-08-29 14:53:42.180105 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:55.471) 0:01:42.641 *********
2025-08-29 14:53:42.180115 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.180125 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.180135 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.180144 | orchestrator |
2025-08-29 14:53:42.180154 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-08-29 14:53:42.180164 | orchestrator | Friday 29 August 2025 14:51:32 +0000 (0:00:00.329) 0:01:42.970 *********
2025-08-29 14:53:42.180174 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180185 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180195 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180226 | orchestrator |
2025-08-29 14:53:42.180238 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-08-29 14:53:42.180257 | orchestrator | Friday 29 August 2025 14:51:33 +0000 (0:00:01.224) 0:01:44.195 *********
2025-08-29 14:53:42.180267 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180277 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180287 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180297 | orchestrator |
2025-08-29 14:53:42.180306 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-08-29 14:53:42.180316 | orchestrator | Friday 29 August 2025 14:51:35 +0000 (0:00:01.497) 0:01:45.693 *********
2025-08-29 14:53:42.180326 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180336 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180348 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180358 | orchestrator |
2025-08-29 14:53:42.180368 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-08-29 14:53:42.180379 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:24.549) 0:02:10.242 *********
2025-08-29 14:53:42.180389 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.180399 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.180408 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.180418 | orchestrator |
2025-08-29 14:53:42.180428 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-08-29 14:53:42.180438 | orchestrator | Friday 29 August 2025 14:52:00 +0000 (0:00:00.766) 0:02:11.008 *********
2025-08-29 14:53:42.180448 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.180458 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.180468 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.180478 | orchestrator |
2025-08-29 14:53:42.180496 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-08-29 14:53:42.180506 | orchestrator | Friday 29 August 2025 14:52:01 +0000 (0:00:01.065) 0:02:12.074 *********
2025-08-29 14:53:42.180517 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180527 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180540 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180551 | orchestrator |
2025-08-29 14:53:42.180562 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-08-29 14:53:42.180573 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:00.739) 0:02:12.813 *********
2025-08-29 14:53:42.180584 | orchestrator | ok: [testbed-node-2]
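The node-token tasks around this point (register the file's access mode, relax it so the token can be read, then restore the original mode) follow a common save-and-restore pattern. A minimal sketch, using a throwaway temp file instead of the real `/var/lib/rancher/k3s/server/node-token`; the token string and modes here are illustrative only:

```shell
#!/bin/sh
# Sketch only: a temp file stands in for /var/lib/rancher/k3s/server/node-token.
set -eu
workdir=$(mktemp -d)
token_file="$workdir/node-token"

# k3s writes the token readable by root only (0600 in this sketch)
printf 'K10abc::server:not-a-real-token\n' > "$token_file"
chmod 0600 "$token_file"

# "Register node-token file access mode"
orig_mode=$(stat -c '%a' "$token_file")

# "Change file access node-token": relax the mode so the token can be read
chmod 0644 "$token_file"

# "Read node-token from master"
token=$(cat "$token_file")

# "Restore node-token file access"
chmod "0$orig_mode" "$token_file"
```

The point of the dance is that the token never stays world-readable longer than the read itself.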
2025-08-29 14:53:42.180595 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.180605 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.180616 | orchestrator |
2025-08-29 14:53:42.180627 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-08-29 14:53:42.180638 | orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:00.772) 0:02:13.586 *********
2025-08-29 14:53:42.180648 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.180658 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.180668 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.180679 | orchestrator |
2025-08-29 14:53:42.180688 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-08-29 14:53:42.180698 | orchestrator | Friday 29 August 2025 14:52:03 +0000 (0:00:00.350) 0:02:13.937 *********
2025-08-29 14:53:42.180709 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180719 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180729 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180739 | orchestrator |
2025-08-29 14:53:42.180876 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-08-29 14:53:42.180891 | orchestrator | Friday 29 August 2025 14:52:04 +0000 (0:00:00.858) 0:02:14.795 *********
2025-08-29 14:53:42.180903 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180914 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180924 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180934 | orchestrator |
2025-08-29 14:53:42.180944 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-08-29 14:53:42.180950 | orchestrator | Friday 29 August 2025 14:52:04 +0000 (0:00:00.663) 0:02:15.459 *********
2025-08-29 14:53:42.180966 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.180972 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.180979 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.180985 | orchestrator |
2025-08-29 14:53:42.180991 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-08-29 14:53:42.180997 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:00.936) 0:02:16.395 *********
2025-08-29 14:53:42.181003 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:53:42.181010 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:53:42.181016 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:53:42.181022 | orchestrator |
2025-08-29 14:53:42.181028 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-08-29 14:53:42.181034 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:00.789) 0:02:17.185 *********
2025-08-29 14:53:42.181040 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.181046 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.181053 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.181059 | orchestrator |
2025-08-29 14:53:42.181065 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-08-29 14:53:42.181071 | orchestrator | Friday 29 August 2025 14:52:07 +0000 (0:00:00.518) 0:02:17.703 *********
2025-08-29 14:53:42.181077 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.181089 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.181095 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.181101 | orchestrator |
2025-08-29 14:53:42.181107 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-08-29 14:53:42.181114 | orchestrator | Friday 29 August 2025 14:52:07 +0000 (0:00:00.352) 0:02:18.056 *********
2025-08-29 14:53:42.181120 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.181126 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.181132 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.181138 | orchestrator |
2025-08-29 14:53:42.181144 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-08-29 14:53:42.181151 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:00.675) 0:02:18.732 *********
2025-08-29 14:53:42.181157 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.181163 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.181169 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.181175 | orchestrator |
2025-08-29 14:53:42.181182 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-08-29 14:53:42.181188 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:00.658) 0:02:19.390 *********
2025-08-29 14:53:42.181195 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:53:42.181201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:53:42.181226 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:53:42.181232 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:53:42.181239 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:53:42.181245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:53:42.181251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:53:42.181258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:53:42.181264 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:53:42.181279 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-08-29 14:53:42.181285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:53:42.181296 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:53:42.181302 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-08-29 14:53:42.181308 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:53:42.181314 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:53:42.181320 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:53:42.181326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:53:42.181333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:53:42.181339 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:53:42.181345 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:53:42.181351 | orchestrator |
2025-08-29 14:53:42.181373 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-08-29 14:53:42.181379 | orchestrator |
2025-08-29 14:53:42.181385 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-08-29 14:53:42.181392 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:03.487) 0:02:22.877 *********
2025-08-29 14:53:42.181398 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:53:42.181404 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:53:42.181410 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:53:42.181416 | orchestrator |
2025-08-29 14:53:42.181422 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-08-29 14:53:42.181429 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:00.328) 0:02:23.205 *********
2025-08-29 14:53:42.181435 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:53:42.181441 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:53:42.181447 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:53:42.181453 | orchestrator |
2025-08-29 14:53:42.181459 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-08-29 14:53:42.181466 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:00.674) 0:02:23.880 *********
2025-08-29 14:53:42.181472 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:53:42.181478 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:53:42.181484 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:53:42.181490 | orchestrator |
2025-08-29 14:53:42.181496 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-08-29 14:53:42.181502 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:00.344) 0:02:24.224 *********
2025-08-29 14:53:42.181509 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:53:42.181515 | orchestrator |
2025-08-29 14:53:42.181521 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-08-29 14:53:42.181527 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:00.678) 0:02:24.902 *********
2025-08-29 14:53:42.181538 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:53:42.181544 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:53:42.181550 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:53:42.181560 | orchestrator |
2025-08-29 14:53:42.181570 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-08-29 14:53:42.181580 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:00.302) 0:02:25.205 *********
2025-08-29 14:53:42.181590 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:53:42.181599 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:53:42.181609 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:53:42.181620 | orchestrator |
2025-08-29 14:53:42.181630 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-08-29 14:53:42.181655 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:00.296) 0:02:25.502 *********
2025-08-29 14:53:42.181665 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:53:42.181676 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:53:42.181683 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:53:42.181689 | orchestrator |
2025-08-29 14:53:42.181695 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-08-29 14:53:42.181701 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.540) 0:02:26.042 *********
2025-08-29 14:53:42.181707 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:53:42.181713 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:53:42.181719 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:53:42.181725 | orchestrator |
2025-08-29 14:53:42.181731 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-08-29 14:53:42.181737 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:01.127) 0:02:26.705 *********
2025-08-29 14:53:42.181743 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:53:42.181749 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:53:42.181755 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:53:42.181761 | orchestrator |
2025-08-29 14:53:42.181767 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-08-29 14:53:42.181773 | orchestrator | Friday 29 August 2025 14:52:17 +0000 (0:00:01.127) 0:02:27.833 *********
2025-08-29 14:53:42.181779 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:53:42.181785 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:53:42.181791 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:53:42.181797 | orchestrator |
2025-08-29 14:53:42.181803 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-08-29 14:53:42.181809 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:01.248) 0:02:29.081 *********
2025-08-29 14:53:42.181815 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:53:42.181821 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:53:42.181827 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:53:42.181833 | orchestrator |
2025-08-29 14:53:42.181845 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-08-29 14:53:42.181852 | orchestrator |
2025-08-29 14:53:42.181858 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-08-29 14:53:42.181864 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:11.663) 0:02:40.745 *********
2025-08-29 14:53:42.181870 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.181876 | orchestrator |
2025-08-29 14:53:42.181882 | orchestrator | TASK [Create .kube directory] **************************************************
2025-08-29 14:53:42.181888 | orchestrator | Friday 29 August 2025 14:52:30 +0000 (0:00:00.862) 0:02:41.607 *********
2025-08-29 14:53:42.181894 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.181900 | orchestrator |
2025-08-29 14:53:42.181906 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-08-29 14:53:42.181912 | orchestrator | Friday 29 August 2025 14:52:31 +0000 (0:00:00.427) 0:02:42.035 *********
2025-08-29 14:53:42.181918 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 14:53:42.181925 | orchestrator |
2025-08-29 14:53:42.181931 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 14:53:42.181937 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:00.627) 0:02:42.662 *********
2025-08-29 14:53:42.181943 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.181949 | orchestrator |
2025-08-29 14:53:42.181955 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-08-29 14:53:42.181961 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:00.878) 0:02:43.541 *********
2025-08-29 14:53:42.181967 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.181973 | orchestrator |
2025-08-29 14:53:42.181980 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-08-29 14:53:42.181986 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:01.171) 0:02:44.712 *********
2025-08-29 14:53:42.181999 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 14:53:42.182006 | orchestrator |
2025-08-29 14:53:42.182074 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-08-29 14:53:42.182081 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:01.761) 0:02:46.474 *********
2025-08-29 14:53:42.182087 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 14:53:42.182093 | orchestrator |
2025-08-29 14:53:42.182099 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-08-29 14:53:42.182106 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:00.929) 0:02:47.404 *********
2025-08-29 14:53:42.182112 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.182118 | orchestrator |
2025-08-29 14:53:42.182125 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-08-29 14:53:42.182131 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.438) 0:02:47.843 *********
2025-08-29 14:53:42.182137 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.182143 | orchestrator |
2025-08-29 14:53:42.182150 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-08-29 14:53:42.182156 | orchestrator |
2025-08-29 14:53:42.182162 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-08-29 14:53:42.182168 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.160) 0:02:48.317 *********
2025-08-29 14:53:42.182175 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.182181 | orchestrator |
2025-08-29 14:53:42.182187 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-08-29 14:53:42.182198 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.248) 0:02:48.478 *********
2025-08-29 14:53:42.182224 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:53:42.182231 | orchestrator |
2025-08-29 14:53:42.182237 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-08-29 14:53:42.182243 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.248) 0:02:48.726 *********
2025-08-29 14:53:42.182250 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.182256 | orchestrator |
2025-08-29 14:53:42.182262 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
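The "Change server address in the kubeconfig" tasks in the play above rewrite the API endpoint that k3s writes into its kubeconfig (the local `https://127.0.0.1:6443`) to the cluster address this run uses, `https://192.168.16.8:6443`. A hedged sketch of that substitution on a throwaway file, not the real `~/.kube/config`; the exact mechanism the playbook uses may differ:

```shell
#!/bin/sh
# Sketch only: operates on a temp file standing in for the copied kubeconfig.
set -eu
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the server address so kubectl targets the cluster, not localhost
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' "$kubeconfig"
```

Without this rewrite, a kubeconfig copied off a k3s server node would point kubectl at the manager's own localhost.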
2025-08-29 14:53:42.182268 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.799) 0:02:49.526 *********
2025-08-29 14:53:42.182274 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.182281 | orchestrator |
2025-08-29 14:53:42.182287 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-08-29 14:53:42.182293 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:02.116) 0:02:51.643 *********
2025-08-29 14:53:42.182299 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.182305 | orchestrator |
2025-08-29 14:53:42.182311 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-08-29 14:53:42.182317 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:00.764) 0:02:52.408 *********
2025-08-29 14:53:42.182323 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.182330 | orchestrator |
2025-08-29 14:53:42.182336 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-08-29 14:53:42.182342 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:00.451) 0:02:52.859 *********
2025-08-29 14:53:42.182348 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.182354 | orchestrator |
2025-08-29 14:53:42.182360 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-08-29 14:53:42.182367 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:09.120) 0:03:01.980 *********
2025-08-29 14:53:42.182373 | orchestrator | changed: [testbed-manager]
2025-08-29 14:53:42.182379 | orchestrator |
2025-08-29 14:53:42.182385 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-08-29 14:53:42.182391 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:13.237) 0:03:15.217 *********
2025-08-29 14:53:42.182397 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.182403 | orchestrator |
2025-08-29 14:53:42.182416 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-08-29 14:53:42.182422 | orchestrator |
2025-08-29 14:53:42.182428 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-08-29 14:53:42.182440 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:00.786) 0:03:16.004 *********
2025-08-29 14:53:42.182446 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.182453 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.182459 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.182465 | orchestrator |
2025-08-29 14:53:42.182471 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-08-29 14:53:42.182477 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:00.510) 0:03:16.514 *********
2025-08-29 14:53:42.182484 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182490 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.182498 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.182508 | orchestrator |
2025-08-29 14:53:42.182518 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-08-29 14:53:42.182528 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:00.560) 0:03:17.075 *********
2025-08-29 14:53:42.182538 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:53:42.182549 | orchestrator |
2025-08-29 14:53:42.182558 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-08-29 14:53:42.182568 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.608) 0:03:17.683 *********
2025-08-29 14:53:42.182578 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182587 | orchestrator |
2025-08-29 14:53:42.182597 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-08-29 14:53:42.182606 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.194) 0:03:17.877 *********
2025-08-29 14:53:42.182615 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182625 | orchestrator |
2025-08-29 14:53:42.182634 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-08-29 14:53:42.182644 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.172) 0:03:18.049 *********
2025-08-29 14:53:42.182654 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182664 | orchestrator |
2025-08-29 14:53:42.182674 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-08-29 14:53:42.182684 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.226) 0:03:18.276 *********
2025-08-29 14:53:42.182694 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182706 | orchestrator |
2025-08-29 14:53:42.182716 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-08-29 14:53:42.182726 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.937) 0:03:19.214 *********
2025-08-29 14:53:42.182736 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182746 | orchestrator |
2025-08-29 14:53:42.182756 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-08-29 14:53:42.182767 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.222) 0:03:19.437 *********
2025-08-29 14:53:42.182777 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182787 | orchestrator |
2025-08-29 14:53:42.182797 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-08-29 14:53:42.182808 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.294) 0:03:19.732 *********
2025-08-29 14:53:42.182819 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182829 | orchestrator |
2025-08-29 14:53:42.182840 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-08-29 14:53:42.182851 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.181) 0:03:19.913 *********
2025-08-29 14:53:42.182861 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182871 | orchestrator |
2025-08-29 14:53:42.182891 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-08-29 14:53:42.182917 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.190) 0:03:20.103 *********
2025-08-29 14:53:42.182927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.182938 | orchestrator |
2025-08-29 14:53:42.182949 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-08-29 14:53:42.182959 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.183) 0:03:20.286 *********
2025-08-29 14:53:42.182970 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-08-29 14:53:42.182981 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-08-29 14:53:42.182991 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183003 | orchestrator |
2025-08-29 14:53:42.183013 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-08-29 14:53:42.183024 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.315) 0:03:20.602 *********
2025-08-29 14:53:42.183034 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183045 | orchestrator |
2025-08-29 14:53:42.183056 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-08-29 14:53:42.183067 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.216) 0:03:20.819 *********
2025-08-29 14:53:42.183077 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183087 | orchestrator |
2025-08-29 14:53:42.183097 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-08-29 14:53:42.183107 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.210) 0:03:21.029 *********
2025-08-29 14:53:42.183118 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183128 | orchestrator |
2025-08-29 14:53:42.183138 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-08-29 14:53:42.183149 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.187) 0:03:21.216 *********
2025-08-29 14:53:42.183159 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183169 | orchestrator |
2025-08-29 14:53:42.183181 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-08-29 14:53:42.183191 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.178) 0:03:21.395 *********
2025-08-29 14:53:42.183201 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183261 | orchestrator |
2025-08-29 14:53:42.183273 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-08-29 14:53:42.183283 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:00.161) 0:03:21.557 *********
2025-08-29 14:53:42.183294 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183304 | orchestrator |
2025-08-29 14:53:42.183314 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-08-29 14:53:42.183332 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.497) 0:03:22.054 *********
2025-08-29 14:53:42.183338 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183345 | orchestrator |
2025-08-29 14:53:42.183351 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-08-29 14:53:42.183357 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.189) 0:03:22.244 *********
2025-08-29 14:53:42.183363 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183369 | orchestrator |
2025-08-29 14:53:42.183375 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-08-29 14:53:42.183381 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.211) 0:03:22.456 *********
2025-08-29 14:53:42.183387 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183394 | orchestrator |
2025-08-29 14:53:42.183400 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-08-29 14:53:42.183406 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.190) 0:03:22.647 *********
2025-08-29 14:53:42.183412 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183418 | orchestrator |
2025-08-29 14:53:42.183424 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-08-29 14:53:42.183430 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:00.201) 0:03:22.848 *********
2025-08-29 14:53:42.183445 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183451 | orchestrator |
2025-08-29 14:53:42.183457 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-08-29 14:53:42.183463 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:00.199) 0:03:23.048 *********
2025-08-29 14:53:42.183469 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-08-29 14:53:42.183475 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-08-29 14:53:42.183481 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-08-29 14:53:42.183488 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-08-29 14:53:42.183494 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183500 | orchestrator |
2025-08-29 14:53:42.183506 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-08-29 14:53:42.183512 | orchestrator | Friday 29 August 2025 14:53:12 +0000 (0:00:00.443) 0:03:23.492 *********
2025-08-29 14:53:42.183518 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183524 | orchestrator |
2025-08-29 14:53:42.183530 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-08-29 14:53:42.183535 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.206) 0:03:23.698 *********
2025-08-29 14:53:42.183540 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183546 | orchestrator |
2025-08-29 14:53:42.183551 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-08-29 14:53:42.183556 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.241) 0:03:23.939 *********
2025-08-29 14:53:42.183562 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183567 | orchestrator |
2025-08-29 14:53:42.183572 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-08-29 14:53:42.183578 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.189) 0:03:24.128 *********
2025-08-29 14:53:42.183583 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183588 | orchestrator |
2025-08-29 14:53:42.183594 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-08-29 14:53:42.183600 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.225) 0:03:24.353 *********
2025-08-29 14:53:42.183605 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-08-29 14:53:42.183611 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-08-29 14:53:42.183616 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183622 | orchestrator |
2025-08-29 14:53:42.183627 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-08-29 14:53:42.183633 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.433) 0:03:24.787 *********
2025-08-29 14:53:42.183638 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:53:42.183644 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:53:42.183649 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:53:42.183655 | orchestrator |
2025-08-29 14:53:42.183660 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-08-29 14:53:42.183665 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.721) 0:03:25.509 *********
2025-08-29 14:53:42.183670 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:53:42.183676 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:53:42.183681 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:53:42.183687 | orchestrator |
2025-08-29 14:53:42.183692 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-08-29 14:53:42.183697 | orchestrator |
2025-08-29 14:53:42.183703 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-08-29 14:53:42.183708 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:01.654) 0:03:27.163 *********
2025-08-29 14:53:42.183713 | orchestrator | ok: [testbed-manager]
2025-08-29 14:53:42.183721 | orchestrator |
2025-08-29 14:53:42.183730 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-08-29 14:53:42.183745 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:00.145) 0:03:27.308 *********
2025-08-29 14:53:42.183754 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml
for testbed-manager 2025-08-29 14:53:42.183763 | orchestrator | 2025-08-29 14:53:42.183771 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 14:53:42.183791 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:00.432) 0:03:27.741 ********* 2025-08-29 14:53:42.183800 | orchestrator | changed: [testbed-manager] 2025-08-29 14:53:42.183809 | orchestrator | 2025-08-29 14:53:42.183817 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 14:53:42.183826 | orchestrator | 2025-08-29 14:53:42.183835 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 14:53:42.183851 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:06.110) 0:03:33.851 ********* 2025-08-29 14:53:42.183860 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:53:42.183869 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:53:42.183876 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:53:42.183884 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:53:42.183892 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:53:42.183900 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:53:42.183908 | orchestrator | 2025-08-29 14:53:42.183917 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 14:53:42.183926 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.970) 0:03:34.822 ********* 2025-08-29 14:53:42.183936 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:53:42.183945 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:53:42.183954 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 14:53:42.183963 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-08-29 14:53:42.183972 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:53:42.183981 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:53:42.183991 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:53:42.183997 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 14:53:42.184003 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 14:53:42.184008 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:53:42.184014 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:53:42.184020 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:53:42.184025 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:53:42.184030 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:53:42.184035 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 14:53:42.184041 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:53:42.184046 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 14:53:42.184051 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 14:53:42.184057 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:53:42.184062 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:53:42.184067 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:53:42.184084 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:53:42.184089 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:53:42.184095 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:53:42.184100 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:53:42.184105 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:53:42.184111 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:53:42.184116 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:53:42.184121 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:53:42.184127 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:53:42.184132 | orchestrator | 2025-08-29 14:53:42.184137 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 14:53:42.184143 | orchestrator | Friday 29 August 2025 14:53:38 +0000 (0:00:14.693) 0:03:49.516 ********* 2025-08-29 14:53:42.184148 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.184153 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.184159 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.184164 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.184170 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.184175 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:53:42.184184 | orchestrator | 2025-08-29 14:53:42.184193 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 14:53:42.184202 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:00.533) 0:03:50.049 ********* 2025-08-29 14:53:42.184256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:53:42.184266 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:53:42.184275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:53:42.184284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:53:42.184293 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:53:42.184301 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:53:42.184311 | orchestrator | 2025-08-29 14:53:42.184318 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:53:42.184329 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:53:42.184340 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 14:53:42.184346 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:53:42.184351 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:53:42.184357 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:53:42.184363 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:53:42.184368 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:53:42.184374 | orchestrator | 2025-08-29 14:53:42.184379 | orchestrator | 2025-08-29 14:53:42.184387 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:53:42.184396 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:00.511) 0:03:50.561 ********* 2025-08-29 14:53:42.184414 | orchestrator | =============================================================================== 2025-08-29 14:53:42.184423 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.47s 2025-08-29 14:53:42.184433 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.55s 2025-08-29 14:53:42.184442 | orchestrator | Manage labels ---------------------------------------------------------- 14.69s 2025-08-29 14:53:42.184452 | orchestrator | kubectl : Install required packages ------------------------------------ 13.24s 2025-08-29 14:53:42.184462 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.66s 2025-08-29 14:53:42.184472 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.12s 2025-08-29 14:53:42.184481 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.14s 2025-08-29 14:53:42.184491 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.11s 2025-08-29 14:53:42.184500 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.49s 2025-08-29 14:53:42.184509 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.76s 2025-08-29 14:53:42.184517 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.22s 2025-08-29 14:53:42.184525 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.12s 2025-08-29 14:53:42.184541 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.12s 
2025-08-29 14:53:42.184550 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.11s 2025-08-29 14:53:42.184559 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.99s 2025-08-29 14:53:42.184568 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.94s 2025-08-29 14:53:42.184577 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.86s 2025-08-29 14:53:42.184585 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.83s 2025-08-29 14:53:42.184594 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.81s 2025-08-29 14:53:42.184603 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.76s 2025-08-29 14:53:42.184612 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 4e91cefe-a39e-42f0-b92c-80defeaa36e3 is in state STARTED 2025-08-29 14:53:42.184621 | orchestrator | 2025-08-29 14:53:42 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:42.184631 | orchestrator | 2025-08-29 14:53:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:45.233469 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:45.233576 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:45.233591 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:45.233602 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 5be29207-1409-485f-9e0a-e99689a2b0b9 is in state STARTED 2025-08-29 14:53:45.233612 | orchestrator | 2025-08-29 14:53:45 | INFO  | Task 4e91cefe-a39e-42f0-b92c-80defeaa36e3 is in state STARTED 2025-08-29 14:53:45.238313 | orchestrator | 
2025-08-29 14:53:45 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:45.239289 | orchestrator | 2025-08-29 14:53:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:48.404496 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:48.404805 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:48.406101 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:48.406139 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 5be29207-1409-485f-9e0a-e99689a2b0b9 is in state STARTED 2025-08-29 14:53:48.408752 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 4e91cefe-a39e-42f0-b92c-80defeaa36e3 is in state SUCCESS 2025-08-29 14:53:48.408794 | orchestrator | 2025-08-29 14:53:48 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:48.408804 | orchestrator | 2025-08-29 14:53:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:51.438685 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:51.439340 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:51.440651 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:51.441441 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 5be29207-1409-485f-9e0a-e99689a2b0b9 is in state STARTED 2025-08-29 14:53:51.443988 | orchestrator | 2025-08-29 14:53:51 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:51.444026 | orchestrator | 2025-08-29 14:53:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:54.474977 | orchestrator | 2025-08-29 14:53:54 | INFO  | 
Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:54.476896 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:54.478869 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:54.479688 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 5be29207-1409-485f-9e0a-e99689a2b0b9 is in state SUCCESS 2025-08-29 14:53:54.480510 | orchestrator | 2025-08-29 14:53:54 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:54.480543 | orchestrator | 2025-08-29 14:53:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:53:57.509554 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:53:57.510532 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:53:57.511108 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:53:57.514181 | orchestrator | 2025-08-29 14:53:57 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:53:57.514296 | orchestrator | 2025-08-29 14:53:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:00.559255 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:54:00.561089 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:00.562272 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:00.562915 | orchestrator | 2025-08-29 14:54:00 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:00.563063 | orchestrator | 2025-08-29 14:54:00 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 14:54:03.602473 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:54:03.602992 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:03.603186 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:03.604378 | orchestrator | 2025-08-29 14:54:03 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:03.604459 | orchestrator | 2025-08-29 14:54:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:06.645707 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:54:06.646986 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:06.647634 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:06.649114 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:06.649183 | orchestrator | 2025-08-29 14:54:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:09.693663 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state STARTED 2025-08-29 14:54:09.696273 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:09.699418 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:09.700780 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:09.700820 | orchestrator | 2025-08-29 14:54:09 | INFO  | Wait 1 second(s) until the next check 
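The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a poll-and-wait loop: the orchestrator repeatedly queries each outstanding task's state and sleeps between rounds until every task reaches SUCCESS. A minimal sketch of that pattern, under the assumption of a hypothetical `poll` callable returning a task's current state (the real OSISM manager API is not shown in this log):

```python
import time

def wait_for_tasks(task_ids, poll, interval=1):
    """Poll task states until every task reports SUCCESS.

    Illustrative sketch of the poll-and-wait loop visible in the log.
    `poll` is a hypothetical callable mapping a task ID to its current
    state string (e.g. "STARTED" or "SUCCESS"); `interval` mirrors the
    "Wait 1 second(s) until the next check" delay between rounds.
    Returns the emitted log lines.
    """
    pending = set(task_ids)
    log = []
    while pending:
        # Check each still-running task once per round.
        for task_id in sorted(pending):
            state = poll(task_id)
            log.append(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        # Sleep only if something is still outstanding.
        if pending:
            log.append(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return log
```

Tasks finish independently, so each round can report a mix of STARTED and SUCCESS states, matching the interleaving seen in the log above.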
2025-08-29 14:54:12.734648 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:54:12.734909 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 9e353190-ded5-436c-939e-a5e31f428ed7 is in state SUCCESS 2025-08-29 14:54:12.736583 | orchestrator | 2025-08-29 14:54:12.736619 | orchestrator | 2025-08-29 14:54:12.736631 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 14:54:12.736643 | orchestrator | 2025-08-29 14:54:12.736655 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:54:12.736667 | orchestrator | Friday 29 August 2025 14:53:43 +0000 (0:00:00.137) 0:00:00.137 ********* 2025-08-29 14:54:12.736678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:54:12.736690 | orchestrator | 2025-08-29 14:54:12.736701 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:54:12.736713 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:00.668) 0:00:00.805 ********* 2025-08-29 14:54:12.736724 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:12.736736 | orchestrator | 2025-08-29 14:54:12.736747 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 14:54:12.736759 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:01.163) 0:00:01.968 ********* 2025-08-29 14:54:12.736770 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:12.736781 | orchestrator | 2025-08-29 14:54:12.736792 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:12.736804 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:12.736817 | orchestrator | 2025-08-29 14:54:12.736828 | orchestrator | 2025-08-29 14:54:12.736840 
| orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:12.736868 | orchestrator | Friday 29 August 2025 14:53:46 +0000 (0:00:00.502) 0:00:02.471 ********* 2025-08-29 14:54:12.736879 | orchestrator | =============================================================================== 2025-08-29 14:54:12.736914 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.16s 2025-08-29 14:54:12.736925 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2025-08-29 14:54:12.736936 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2025-08-29 14:54:12.736947 | orchestrator | 2025-08-29 14:54:12.736958 | orchestrator | 2025-08-29 14:54:12.736969 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 14:54:12.736979 | orchestrator | 2025-08-29 14:54:12.736990 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 14:54:12.737001 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:00.251) 0:00:00.251 ********* 2025-08-29 14:54:12.737012 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:12.737025 | orchestrator | 2025-08-29 14:54:12.737036 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 14:54:12.737047 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:01.098) 0:00:01.350 ********* 2025-08-29 14:54:12.737058 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:12.737068 | orchestrator | 2025-08-29 14:54:12.737079 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:54:12.737090 | orchestrator | Friday 29 August 2025 14:53:46 +0000 (0:00:00.509) 0:00:01.860 ********* 2025-08-29 14:54:12.737101 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] 2025-08-29 14:54:12.737112 | orchestrator | 2025-08-29 14:54:12.737123 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:54:12.737134 | orchestrator | Friday 29 August 2025 14:53:47 +0000 (0:00:00.697) 0:00:02.557 ********* 2025-08-29 14:54:12.737146 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:12.737158 | orchestrator | 2025-08-29 14:54:12.737170 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 14:54:12.737205 | orchestrator | Friday 29 August 2025 14:53:48 +0000 (0:00:01.175) 0:00:03.732 ********* 2025-08-29 14:54:12.737218 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:12.737231 | orchestrator | 2025-08-29 14:54:12.737243 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 14:54:12.737255 | orchestrator | Friday 29 August 2025 14:53:49 +0000 (0:00:00.947) 0:00:04.679 ********* 2025-08-29 14:54:12.737288 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:54:12.737300 | orchestrator | 2025-08-29 14:54:12.737312 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 14:54:12.737324 | orchestrator | Friday 29 August 2025 14:53:51 +0000 (0:00:02.065) 0:00:06.744 ********* 2025-08-29 14:54:12.737336 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:54:12.737348 | orchestrator | 2025-08-29 14:54:12.737359 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 14:54:12.737371 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:00.911) 0:00:07.656 ********* 2025-08-29 14:54:12.737384 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:12.737396 | orchestrator | 2025-08-29 14:54:12.737407 | orchestrator | TASK [Enable kubectl command line completion] 
********************************** 2025-08-29 14:54:12.737420 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:00.406) 0:00:08.063 ********* 2025-08-29 14:54:12.737432 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:12.737445 | orchestrator | 2025-08-29 14:54:12.737457 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:12.737469 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:12.737482 | orchestrator | 2025-08-29 14:54:12.737494 | orchestrator | 2025-08-29 14:54:12.737506 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:12.737518 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:00.314) 0:00:08.378 ********* 2025-08-29 14:54:12.737531 | orchestrator | =============================================================================== 2025-08-29 14:54:12.737550 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.07s 2025-08-29 14:54:12.737560 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.18s 2025-08-29 14:54:12.737571 | orchestrator | Get home directory of operator user ------------------------------------- 1.10s 2025-08-29 14:54:12.737595 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.95s 2025-08-29 14:54:12.737607 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.91s 2025-08-29 14:54:12.737618 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s 2025-08-29 14:54:12.737628 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2025-08-29 14:54:12.737639 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s 2025-08-29 14:54:12.737650 | 
orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-08-29 14:54:12.737660 | orchestrator | 2025-08-29 14:54:12.737671 | orchestrator | 2025-08-29 14:54:12.737682 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:54:12.737693 | orchestrator | 2025-08-29 14:54:12.737703 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:54:12.737714 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-08-29 14:54:12.737725 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:54:12.737736 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:54:12.737747 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:54:12.737757 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:12.737768 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:12.737778 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:12.737789 | orchestrator | 2025-08-29 14:54:12.737800 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:54:12.737810 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:01.012) 0:00:01.262 ********* 2025-08-29 14:54:12.737827 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:54:12.737839 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:54:12.737849 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:54:12.737860 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:54:12.737871 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:54:12.737894 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
2025-08-29 14:54:12.737905 | orchestrator | 2025-08-29 14:54:12.737916 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 14:54:12.737926 | orchestrator | 2025-08-29 14:54:12.737937 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 14:54:12.737959 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:01.124) 0:00:02.386 ********* 2025-08-29 14:54:12.737971 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:12.737984 | orchestrator | 2025-08-29 14:54:12.737995 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 14:54:12.738005 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:01.722) 0:00:04.109 ********* 2025-08-29 14:54:12.738079 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:54:12.738095 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:54:12.738106 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:54:12.738117 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:54:12.738128 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:54:12.738139 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:54:12.738158 | orchestrator | 2025-08-29 14:54:12.738169 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:54:12.738180 | orchestrator | Friday 29 August 2025 14:52:56 +0000 (0:00:01.382) 0:00:05.492 ********* 2025-08-29 14:54:12.738207 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:54:12.738218 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:54:12.738229 | 
orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:54:12.738240 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:54:12.738250 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:54:12.738261 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:54:12.738272 | orchestrator | 2025-08-29 14:54:12.738283 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:54:12.738293 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:02.031) 0:00:07.523 ********* 2025-08-29 14:54:12.738304 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 14:54:12.738315 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:12.738326 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 14:54:12.738336 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:12.738347 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-08-29 14:54:12.738358 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:12.738368 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-08-29 14:54:12.738379 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-08-29 14:54:12.738390 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:12.738400 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:12.738411 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-08-29 14:54:12.738421 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:12.738432 | orchestrator | 2025-08-29 14:54:12.738443 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-08-29 14:54:12.738453 | orchestrator | Friday 29 August 2025 14:53:00 +0000 (0:00:02.070) 0:00:09.594 ********* 2025-08-29 14:54:12.738464 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:12.738475 
| orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:12.738486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:12.738506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:12.738517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:12.738528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:12.738539 | orchestrator | 2025-08-29 14:54:12.738550 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 14:54:12.738561 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:01.383) 0:00:10.977 ********* 2025-08-29 14:54:12.738575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738696 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738760 | orchestrator | 2025-08-29 14:54:12.738771 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-08-29 14:54:12.738783 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:03.910) 0:00:14.888 ********* 2025-08-29 14:54:12.738799 
| orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.738991 | orchestrator | 2025-08-29 14:54:12.739003 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-08-29 14:54:12.739014 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:03.761) 0:00:18.650 ********* 2025-08-29 14:54:12.739025 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:12.739036 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:12.739047 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:12.739058 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:12.739068 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:12.739079 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:12.739090 | orchestrator | 2025-08-29 14:54:12.739101 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-08-29 14:54:12.739112 | orchestrator | 
Friday 29 August 2025 14:53:11 +0000 (0:00:01.131) 0:00:19.782 ********* 2025-08-29 14:54:12.739123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739217 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:54:12.739310 | orchestrator | 2025-08-29 14:54:12.739321 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.739332 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:02.158) 0:00:21.940 ********* 2025-08-29 14:54:12.739343 | orchestrator | 2025-08-29 14:54:12.739354 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.739365 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.185) 0:00:22.126 ********* 2025-08-29 14:54:12.739376 | orchestrator | 2025-08-29 14:54:12.740063 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.740081 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.293) 0:00:22.420 
********* 2025-08-29 14:54:12.740092 | orchestrator | 2025-08-29 14:54:12.740103 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.740119 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.163) 0:00:22.583 ********* 2025-08-29 14:54:12.740130 | orchestrator | 2025-08-29 14:54:12.740141 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.740151 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.143) 0:00:22.727 ********* 2025-08-29 14:54:12.740162 | orchestrator | 2025-08-29 14:54:12.740173 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 14:54:12.740252 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.128) 0:00:22.856 ********* 2025-08-29 14:54:12.740266 | orchestrator | 2025-08-29 14:54:12.740276 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-08-29 14:54:12.740287 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.609) 0:00:23.465 ********* 2025-08-29 14:54:12.740298 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:12.740319 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:12.740330 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:12.740340 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:12.740351 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:12.740362 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:12.740373 | orchestrator | 2025-08-29 14:54:12.740384 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-08-29 14:54:12.740395 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:21.110) 0:00:44.576 ********* 2025-08-29 14:54:12.740406 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:54:12.740417 | orchestrator | ok: [testbed-node-4] 
2025-08-29 14:54:12.740428 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:54:12.740439 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:12.740449 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:12.740460 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:12.740470 | orchestrator | 2025-08-29 14:54:12.740482 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 14:54:12.740503 | orchestrator | Friday 29 August 2025 14:53:37 +0000 (0:00:01.390) 0:00:45.966 ********* 2025-08-29 14:54:12.740514 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:12.740524 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:12.740535 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:12.740546 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:12.740557 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:12.740567 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:12.740578 | orchestrator | 2025-08-29 14:54:12.740589 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-08-29 14:54:12.740600 | orchestrator | Friday 29 August 2025 14:53:48 +0000 (0:00:11.020) 0:00:56.987 ********* 2025-08-29 14:54:12.740611 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-08-29 14:54:12.740623 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-08-29 14:54:12.740633 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-08-29 14:54:12.740644 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-08-29 14:54:12.740655 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-1'}) 2025-08-29 14:54:12.740666 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-08-29 14:54:12.740677 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-08-29 14:54:12.740688 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-08-29 14:54:12.740699 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-08-29 14:54:12.740710 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-08-29 14:54:12.740720 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-08-29 14:54:12.740731 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-08-29 14:54:12.740741 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:54:12.740752 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:54:12.740763 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:54:12.740781 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:54:12.740790 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 14:54:12.740800 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2025-08-29 14:54:12.740809 | orchestrator | 2025-08-29 14:54:12.740819 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-08-29 14:54:12.740835 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:07.566) 0:01:04.554 ********* 2025-08-29 14:54:12.740850 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-08-29 14:54:12.740867 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:12.740889 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-08-29 14:54:12.740903 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:12.740918 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-08-29 14:54:12.740932 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:12.740945 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-08-29 14:54:12.740959 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-08-29 14:54:12.740973 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-08-29 14:54:12.740987 | orchestrator | 2025-08-29 14:54:12.741002 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-08-29 14:54:12.741017 | orchestrator | Friday 29 August 2025 14:53:58 +0000 (0:00:02.785) 0:01:07.339 ********* 2025-08-29 14:54:12.741030 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:54:12.741044 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:12.741059 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:54:12.741073 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:12.741089 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-08-29 14:54:12.741106 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:12.741121 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:54:12.741138 | orchestrator | changed: 
[testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:54:12.741154 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-08-29 14:54:12.741170 | orchestrator | 2025-08-29 14:54:12.741180 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 14:54:12.741215 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:03.758) 0:01:11.098 ********* 2025-08-29 14:54:12.741225 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:12.741235 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:12.741255 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:12.741265 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:12.741275 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:12.741284 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:12.741294 | orchestrator | 2025-08-29 14:54:12.741304 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:12.741314 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:54:12.741325 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:54:12.741339 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:54:12.741357 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:54:12.741373 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:54:12.741402 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:54:12.741420 | orchestrator | 2025-08-29 14:54:12.741439 | orchestrator | 2025-08-29 14:54:12.741456 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:54:12.741470 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:07.862) 0:01:18.960 ********* 2025-08-29 14:54:12.741480 | orchestrator | =============================================================================== 2025-08-29 14:54:12.741490 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 21.11s 2025-08-29 14:54:12.741499 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.88s 2025-08-29 14:54:12.741509 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.57s 2025-08-29 14:54:12.741521 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.91s 2025-08-29 14:54:12.741538 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.76s 2025-08-29 14:54:12.741554 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.76s 2025-08-29 14:54:12.741569 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.79s 2025-08-29 14:54:12.741584 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.16s 2025-08-29 14:54:12.741600 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.07s 2025-08-29 14:54:12.741616 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.03s 2025-08-29 14:54:12.741633 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.72s 2025-08-29 14:54:12.741650 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.52s 2025-08-29 14:54:12.741664 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.39s 2025-08-29 14:54:12.741681 | orchestrator | openvswitch : Create 
/run/openvswitch directory on host ----------------- 1.38s 2025-08-29 14:54:12.741690 | orchestrator | module-load : Load modules ---------------------------------------------- 1.38s 2025-08-29 14:54:12.741700 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.13s 2025-08-29 14:54:12.741709 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2025-08-29 14:54:12.741738 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s 2025-08-29 14:54:12.741749 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:12.741758 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:12.741768 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:12.741777 | orchestrator | 2025-08-29 14:54:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:15.771325 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:54:15.771716 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:54:15.771961 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:54:15.773565 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:54:15.773638 | orchestrator | 2025-08-29 14:54:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:18.808441 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:54:18.809401 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state 
STARTED [... repeated polling output elided: tasks a1ab48bf-76ad-4aab-ad3c-f95741e0d59d, 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0, 690c9b85-b454-45eb-bbe1-417344288ec7, and 447a2901-5c14-406c-9458-ad2be17aefb0 were each reported "is in state STARTED" followed by "Wait 1 second(s) until the next check" roughly every 3 seconds from 14:54:18 through 14:55:31 ...] 2025-08-29 14:55:31.986943 | orchestrator | 2025-08-29 14:55:31 | INFO  | Task
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:55:31.988095 | orchestrator | 2025-08-29 14:55:31 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:55:31.988288 | orchestrator | 2025-08-29 14:55:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:35.030295 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:55:35.030383 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:55:35.031102 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:55:35.032032 | orchestrator | 2025-08-29 14:55:35 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state STARTED 2025-08-29 14:55:35.032067 | orchestrator | 2025-08-29 14:55:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:38.074587 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:55:38.074777 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:55:38.075471 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:55:38.076386 | orchestrator | 2025-08-29 14:55:38 | INFO  | Task 447a2901-5c14-406c-9458-ad2be17aefb0 is in state SUCCESS 2025-08-29 14:55:38.077167 | orchestrator | 2025-08-29 14:55:38.077190 | orchestrator | 2025-08-29 14:55:38.077198 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-08-29 14:55:38.077205 | orchestrator | 2025-08-29 14:55:38.077212 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 14:55:38.077219 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.119) 0:00:00.119 
********* 2025-08-29 14:55:38.077226 | orchestrator | ok: [localhost] => { 2025-08-29 14:55:38.077235 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-08-29 14:55:38.077242 | orchestrator | } 2025-08-29 14:55:38.077249 | orchestrator | 2025-08-29 14:55:38.077256 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-08-29 14:55:38.077262 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:00.045) 0:00:00.164 ********* 2025-08-29 14:55:38.077270 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-08-29 14:55:38.077277 | orchestrator | ...ignoring 2025-08-29 14:55:38.077284 | orchestrator | 2025-08-29 14:55:38.077290 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-08-29 14:55:38.077318 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:02.978) 0:00:03.143 ********* 2025-08-29 14:55:38.077324 | orchestrator | skipping: [localhost] 2025-08-29 14:55:38.077331 | orchestrator | 2025-08-29 14:55:38.077337 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-08-29 14:55:38.077343 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:00.226) 0:00:03.370 ********* 2025-08-29 14:55:38.077349 | orchestrator | ok: [localhost] 2025-08-29 14:55:38.077355 | orchestrator | 2025-08-29 14:55:38.077361 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:55:38.077367 | orchestrator | 2025-08-29 14:55:38.077374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:55:38.077384 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:00.623) 0:00:03.993 ********* 2025-08-29 
14:55:38.077395 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:38.077405 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:38.077415 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:38.077424 | orchestrator | 2025-08-29 14:55:38.077434 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:55:38.077443 | orchestrator | Friday 29 August 2025 14:53:16 +0000 (0:00:01.128) 0:00:05.122 ********* 2025-08-29 14:55:38.077453 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-08-29 14:55:38.077465 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-08-29 14:55:38.077475 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-08-29 14:55:38.077485 | orchestrator | 2025-08-29 14:55:38.077496 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-08-29 14:55:38.077506 | orchestrator | 2025-08-29 14:55:38.077516 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 14:55:38.077527 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:01.627) 0:00:06.750 ********* 2025-08-29 14:55:38.077538 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:38.077548 | orchestrator | 2025-08-29 14:55:38.077557 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 14:55:38.077568 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.730) 0:00:07.480 ********* 2025-08-29 14:55:38.077580 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:38.077590 | orchestrator | 2025-08-29 14:55:38.077599 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-08-29 14:55:38.077609 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.927) 0:00:08.407 ********* 
2025-08-29 14:55:38.077618 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077629 | orchestrator |
2025-08-29 14:55:38.077638 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-08-29 14:55:38.077648 | orchestrator | Friday 29 August 2025  14:53:20 +0000 (0:00:00.282) 0:00:08.690 *********
2025-08-29 14:55:38.077657 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077667 | orchestrator |
2025-08-29 14:55:38.077676 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-08-29 14:55:38.077686 | orchestrator | Friday 29 August 2025  14:53:20 +0000 (0:00:00.335) 0:00:09.026 *********
2025-08-29 14:55:38.077696 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077705 | orchestrator |
2025-08-29 14:55:38.077715 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-08-29 14:55:38.077724 | orchestrator | Friday 29 August 2025  14:53:21 +0000 (0:00:00.480) 0:00:09.506 *********
2025-08-29 14:55:38.077734 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077743 | orchestrator |
2025-08-29 14:55:38.077753 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 14:55:38.077762 | orchestrator | Friday 29 August 2025  14:53:21 +0000 (0:00:00.496) 0:00:10.003 *********
2025-08-29 14:55:38.077772 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:55:38.077790 | orchestrator |
2025-08-29 14:55:38.077799 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 14:55:38.077809 | orchestrator | Friday 29 August 2025  14:53:22 +0000 (0:00:01.100) 0:00:11.104 *********
2025-08-29 14:55:38.077818 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:55:38.077828 | orchestrator |
2025-08-29 14:55:38.077837 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-08-29 14:55:38.077847 | orchestrator | Friday 29 August 2025  14:53:23 +0000 (0:00:00.508) 0:00:12.017 *********
2025-08-29 14:55:38.077858 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077868 | orchestrator |
2025-08-29 14:55:38.077886 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-08-29 14:55:38.077897 | orchestrator | Friday 29 August 2025  14:53:24 +0000 (0:00:00.508) 0:00:12.525 *********
2025-08-29 14:55:38.077906 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.077916 | orchestrator |
2025-08-29 14:55:38.077934 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-08-29 14:55:38.077944 | orchestrator | Friday 29 August 2025  14:53:24 +0000 (0:00:00.500) 0:00:13.026 *********
2025-08-29 14:55:38.077960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.077975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.077987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078004 | orchestrator |
2025-08-29 14:55:38.078058 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-08-29 14:55:38.078071 | orchestrator | Friday 29 August 2025  14:53:26 +0000 (0:00:01.499) 0:00:14.525 *********
2025-08-29 14:55:38.078097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078149 | orchestrator |
2025-08-29 14:55:38.078159 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-08-29 14:55:38.078175 | orchestrator | Friday 29 August 2025  14:53:29 +0000 (0:00:03.005) 0:00:17.530 *********
2025-08-29 14:55:38.078184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:55:38.078194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:55:38.078204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 14:55:38.078213 | orchestrator |
2025-08-29 14:55:38.078223 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-08-29 14:55:38.078232 | orchestrator | Friday 29 August 2025  14:53:31 +0000 (0:00:02.089) 0:00:19.620 *********
2025-08-29 14:55:38.078242 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:55:38.078252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:55:38.078262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 14:55:38.078272 | orchestrator |
2025-08-29 14:55:38.078282 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-08-29 14:55:38.078292 | orchestrator | Friday 29 August 2025  14:53:34 +0000 (0:00:03.633) 0:00:23.253 *********
2025-08-29 14:55:38.078302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:55:38.078312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:55:38.078325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 14:55:38.078331 | orchestrator |
2025-08-29 14:55:38.078342 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-08-29 14:55:38.078348 | orchestrator | Friday 29 August 2025  14:53:36 +0000 (0:00:01.945) 0:00:25.199 *********
2025-08-29 14:55:38.078354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:55:38.078361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:55:38.078367 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 14:55:38.078373 | orchestrator |
2025-08-29 14:55:38.078379 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-08-29 14:55:38.078385 | orchestrator | Friday 29 August 2025  14:53:39 +0000 (0:00:03.098) 0:00:28.297 *********
2025-08-29 14:55:38.078391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:55:38.078398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:55:38.078404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 14:55:38.078410 | orchestrator |
2025-08-29 14:55:38.078417 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-08-29 14:55:38.078423 | orchestrator | Friday 29 August 2025  14:53:42 +0000 (0:00:02.491) 0:00:30.789 *********
2025-08-29 14:55:38.078429 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:55:38.078436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:55:38.078442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 14:55:38.078448 | orchestrator |
2025-08-29 14:55:38.078454 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 14:55:38.078460 | orchestrator | Friday 29 August 2025  14:53:44 +0000 (0:00:02.030) 0:00:32.819 *********
2025-08-29 14:55:38.078466 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.078473 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:55:38.078479 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:55:38.078500 | orchestrator |
2025-08-29 14:55:38.078506 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-08-29 14:55:38.078512 | orchestrator | Friday 29 August 2025  14:53:45 +0000 (0:00:00.505) 0:00:33.324 *********
2025-08-29 14:55:38.078519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 14:55:38.078551 | orchestrator |
2025-08-29 14:55:38.078557 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-08-29 14:55:38.078567 | orchestrator | Friday 29 August 2025  14:53:46 +0000 (0:00:01.723) 0:00:35.048 *********
2025-08-29 14:55:38.078577 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:38.078586 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:38.078596 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:38.078606 | orchestrator |
2025-08-29 14:55:38.078616 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-08-29 14:55:38.078636 | orchestrator | Friday 29 August 2025  14:53:47 +0000 (0:00:00.793) 0:00:35.842 *********
2025-08-29 14:55:38.078647 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:38.078658 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:38.078668 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:38.078679 | orchestrator |
2025-08-29 14:55:38.078690 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-08-29 14:55:38.078700 | orchestrator | Friday 29 August 2025  14:53:54 +0000 (0:00:07.007) 0:00:42.850 *********
2025-08-29 14:55:38.078710 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:38.078720 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:38.078731 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:38.078743 | orchestrator |
2025-08-29 14:55:38.078753 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:55:38.078763 | orchestrator |
2025-08-29 14:55:38.078772 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:55:38.078782 | orchestrator | Friday 29 August 2025  14:53:55 +0000 (0:00:00.487) 0:00:43.337 *********
2025-08-29 14:55:38.078791 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:55:38.078801 | orchestrator |
2025-08-29 14:55:38.078810 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:55:38.078820 | orchestrator | Friday 29 August 2025  14:53:55 +0000 (0:00:00.659) 0:00:43.996 *********
2025-08-29 14:55:38.078829 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:55:38.078838 | orchestrator |
2025-08-29 14:55:38.078848 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:55:38.078858 | orchestrator | Friday 29 August 2025  14:53:55 +0000 (0:00:00.278) 0:00:44.274 *********
2025-08-29 14:55:38.078868 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:38.078878 | orchestrator |
2025-08-29 14:55:38.078886 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:55:38.078896 | orchestrator | Friday 29 August 2025  14:54:02 +0000 (0:00:06.934) 0:00:51.209 *********
2025-08-29 14:55:38.078905 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:38.078915 | orchestrator |
2025-08-29 14:55:38.078924 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:55:38.078934 | orchestrator |
2025-08-29 14:55:38.078943 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:55:38.078952 | orchestrator | Friday 29 August 2025  14:54:54 +0000 (0:00:51.894) 0:01:43.103 *********
2025-08-29 14:55:38.078962 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:55:38.078971 | orchestrator |
2025-08-29 14:55:38.078981 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:55:38.078990 | orchestrator | Friday 29 August 2025  14:54:55 +0000 (0:00:00.652) 0:01:43.756 *********
2025-08-29 14:55:38.078999 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:55:38.079008 | orchestrator |
2025-08-29 14:55:38.079018 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:55:38.079027 | orchestrator | Friday 29 August 2025  14:54:55 +0000 (0:00:00.341) 0:01:44.098 *********
2025-08-29 14:55:38.079036 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:38.079046 | orchestrator |
2025-08-29 14:55:38.079055 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:55:38.079065 | orchestrator | Friday 29 August 2025  14:54:57 +0000 (0:00:01.828) 0:01:45.926 *********
2025-08-29 14:55:38.079074 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:38.079084 | orchestrator |
2025-08-29 14:55:38.079093 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 14:55:38.079103 | orchestrator |
2025-08-29 14:55:38.079112 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 14:55:38.079136 | orchestrator | Friday 29 August 2025  14:55:14 +0000 (0:00:16.404) 0:02:02.331 *********
2025-08-29 14:55:38.079146 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:55:38.079156 | orchestrator |
2025-08-29 14:55:38.079171 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 14:55:38.079181 | orchestrator | Friday 29 August 2025  14:55:14 +0000 (0:00:00.620) 0:02:02.951 *********
2025-08-29 14:55:38.079191 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:55:38.079200 | orchestrator |
2025-08-29 14:55:38.079215 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 14:55:38.079230 | orchestrator | Friday 29 August 2025  14:55:14 +0000 (0:00:00.236) 0:02:03.187 *********
2025-08-29 14:55:38.079240 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:38.079250 | orchestrator |
2025-08-29 14:55:38.079259 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 14:55:38.079269 | orchestrator | Friday 29 August 2025  14:55:16 +0000 (0:00:01.838) 0:02:05.026 *********
2025-08-29 14:55:38.079278 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:38.079287 | orchestrator |
2025-08-29 14:55:38.079297 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-08-29 14:55:38.079306 | orchestrator |
2025-08-29 14:55:38.079316 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-08-29 14:55:38.079325 | orchestrator | Friday 29 August 2025  14:55:31 +0000 (0:00:15.197) 0:02:20.224 *********
2025-08-29 14:55:38.079335 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:55:38.079344 | orchestrator |
2025-08-29 14:55:38.079353 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-08-29 14:55:38.079363 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.916) 0:02:21.140 *********
2025-08-29 14:55:38.079372 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 14:55:38.079382 | orchestrator | enable_outward_rabbitmq_True
2025-08-29 14:55:38.079391 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 14:55:38.079401 | orchestrator | outward_rabbitmq_restart
2025-08-29 14:55:38.079410 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:55:38.079420 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:55:38.079430 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:55:38.079440 | orchestrator |
2025-08-29 14:55:38.079450 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-08-29 14:55:38.079461 | orchestrator | skipping: no hosts matched
2025-08-29 14:55:38.079470 | orchestrator |
2025-08-29 14:55:38.079476 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-08-29 14:55:38.079482 | orchestrator | skipping: no hosts matched
2025-08-29 14:55:38.079489 | orchestrator |
2025-08-29 14:55:38.079495 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-08-29 14:55:38.079501 | orchestrator | skipping: no hosts matched
2025-08-29 14:55:38.079507 | orchestrator |
2025-08-29 14:55:38.079513 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:55:38.079519 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-08-29 14:55:38.079526 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 14:55:38.079532 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:55:38.079538 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:55:38.079544 | orchestrator |
2025-08-29 14:55:38.079551 | orchestrator |
2025-08-29 14:55:38.079557 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:55:38.079563 | orchestrator | Friday 29 August 2025  14:55:35 +0000 (0:00:02.824) 0:02:23.965 *********
2025-08-29 14:55:38.079569 | orchestrator | ===============================================================================
2025-08-29 14:55:38.079580 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.50s
2025-08-29 14:55:38.079586 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.60s
2025-08-29 14:55:38.079592 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.01s
2025-08-29 14:55:38.079598 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.63s
2025-08-29 14:55:38.079604 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.10s
2025-08-29 14:55:38.079611 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.01s
2025-08-29 14:55:38.079617 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.98s
2025-08-29 14:55:38.079623 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.83s
2025-08-29 14:55:38.079629 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.49s
2025-08-29 14:55:38.079635 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.09s
2025-08-29 14:55:38.079641 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.03s
2025-08-29 14:55:38.079647 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.95s
2025-08-29 14:55:38.079653 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s
2025-08-29 14:55:38.079660 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.72s
2025-08-29 14:55:38.079666 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.63s
2025-08-29 14:55:38.079672 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.50s
2025-08-29 14:55:38.079678 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.13s
2025-08-29 14:55:38.079684 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.10s
2025-08-29 14:55:38.079690 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.93s
2025-08-29 14:55:38.079700 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.92s
2025-08-29 14:55:38.079706 | orchestrator | 2025-08-29 14:55:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:41.126067 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:41.131274 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:41.131347 | orchestrator | 2025-08-29 14:55:41 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:41.131357 | orchestrator | 2025-08-29 14:55:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:44.181809 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:44.182508 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:44.184465 | orchestrator | 2025-08-29 14:55:44 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:44.184525 | orchestrator | 2025-08-29 14:55:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:47.228201 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:47.229780 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:47.230567 | orchestrator | 2025-08-29 14:55:47 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:47.230592 | orchestrator | 2025-08-29 14:55:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:50.283972 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:50.292581 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:50.297274 | orchestrator | 2025-08-29 14:55:50 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:50.297347 | orchestrator | 2025-08-29 14:55:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:53.334076 | orchestrator | 2025-08-29 14:55:53 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:53.334478 | orchestrator | 2025-08-29 14:55:53 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:53.336862 | orchestrator | 2025-08-29 14:55:53 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:53.336898 | orchestrator | 2025-08-29 14:55:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:56.396463 | orchestrator | 2025-08-29 14:55:56 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:56.398641 | orchestrator | 2025-08-29 14:55:56 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:56.399640 | orchestrator | 2025-08-29 14:55:56 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:56.399675 | orchestrator | 2025-08-29 14:55:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:55:59.431723 | orchestrator | 2025-08-29 14:55:59 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:55:59.432219 | orchestrator | 2025-08-29 14:55:59 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:55:59.433867 | orchestrator | 2025-08-29 14:55:59 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:55:59.433902 | orchestrator | 2025-08-29 14:55:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:02.480487 | orchestrator | 2025-08-29 14:56:02 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:02.482539 | orchestrator | 2025-08-29 14:56:02 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:02.485061 | orchestrator | 2025-08-29 14:56:02 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:02.485235 | orchestrator | 2025-08-29 14:56:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:05.518345 | orchestrator | 2025-08-29 14:56:05 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:05.518890 | orchestrator | 2025-08-29 14:56:05 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:05.521126 | orchestrator | 2025-08-29 14:56:05 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:05.521202 | orchestrator | 2025-08-29 14:56:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:08.563072 | orchestrator | 2025-08-29 14:56:08 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:08.564816 | orchestrator | 2025-08-29 14:56:08 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:08.565745 | orchestrator | 2025-08-29 14:56:08 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:08.565775 | orchestrator | 2025-08-29 14:56:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:11.606139 | orchestrator | 2025-08-29 14:56:11 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:11.606504 | orchestrator | 2025-08-29 14:56:11 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:11.608379 | orchestrator | 2025-08-29 14:56:11 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:11.608433 | orchestrator | 2025-08-29 14:56:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:14.657085 | orchestrator | 2025-08-29 14:56:14 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:14.657222 | orchestrator | 2025-08-29 14:56:14 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:14.657395 | orchestrator | 2025-08-29 14:56:14 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:14.657413 | orchestrator | 2025-08-29 14:56:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:17.700867 | orchestrator | 2025-08-29 14:56:17 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:17.702408 | orchestrator | 2025-08-29 14:56:17 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:17.704718 | orchestrator | 2025-08-29 14:56:17 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:17.704765 | orchestrator | 2025-08-29 14:56:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:20.748402 | orchestrator | 2025-08-29 14:56:20 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:20.749734 | orchestrator | 2025-08-29 14:56:20 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:20.750436 | orchestrator | 2025-08-29 14:56:20 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:20.750646 | orchestrator | 2025-08-29 14:56:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:23.814470 | orchestrator | 2025-08-29 14:56:23 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:23.816974 | orchestrator | 2025-08-29 14:56:23 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:23.820640 | orchestrator | 2025-08-29 14:56:23 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:23.821398 | orchestrator | 2025-08-29 14:56:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:26.866344 | orchestrator | 2025-08-29 14:56:26 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:26.868961 | orchestrator | 2025-08-29 14:56:26 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:26.872260 | orchestrator | 2025-08-29 14:56:26 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:26.872520 | orchestrator | 2025-08-29 14:56:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:29.911435 | orchestrator | 2025-08-29 14:56:29 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:29.911846 | orchestrator | 2025-08-29 14:56:29 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:29.913322 | orchestrator | 2025-08-29 14:56:29 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:29.913345 | orchestrator | 2025-08-29 14:56:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:32.971455 | orchestrator | 2025-08-29 14:56:32 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:32.971538 | orchestrator | 2025-08-29 14:56:32 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:32.972468 | orchestrator | 2025-08-29 14:56:32 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:56:32.972527 | orchestrator | 2025-08-29 14:56:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:56:36.008153 | orchestrator | 2025-08-29 14:56:36 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED
2025-08-29 14:56:36.009255 | orchestrator | 2025-08-29 14:56:36 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED
2025-08-29 14:56:36.010318 | orchestrator | 2025-08-29 14:56:36 |
INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:36.010413 | orchestrator | 2025-08-29 14:56:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:39.046890 | orchestrator | 2025-08-29 14:56:39 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state STARTED 2025-08-29 14:56:39.049411 | orchestrator | 2025-08-29 14:56:39 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:39.049925 | orchestrator | 2025-08-29 14:56:39 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:39.049941 | orchestrator | 2025-08-29 14:56:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:42.101983 | orchestrator | 2025-08-29 14:56:42 | INFO  | Task a1ab48bf-76ad-4aab-ad3c-f95741e0d59d is in state SUCCESS 2025-08-29 14:56:42.103412 | orchestrator | 2025-08-29 14:56:42.103485 | orchestrator | 2025-08-29 14:56:42.103501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:56:42.103517 | orchestrator | 2025-08-29 14:56:42.103530 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:56:42.103544 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.187) 0:00:00.187 ********* 2025-08-29 14:56:42.103557 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:56:42.103570 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:56:42.103583 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:56:42.103596 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.103609 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.103623 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.103635 | orchestrator | 2025-08-29 14:56:42.103647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:56:42.103659 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.637) 
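The repeated "is in state STARTED … Wait 1 second(s) until the next check" lines above come from a simple poll-until-terminal loop over the three task IDs. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` callable (the real osism client API may differ):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until every one reaches a terminal state."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that finished; keep polling the rest.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

With `interval=1.0` this reproduces the cadence of the log above, where each task is re-checked every few seconds until the first one flips to SUCCESS.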
0:00:00.824 ********* 2025-08-29 14:56:42.103672 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-08-29 14:56:42.103686 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-08-29 14:56:42.103699 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-08-29 14:56:42.103711 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-08-29 14:56:42.103724 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-08-29 14:56:42.103736 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-08-29 14:56:42.103748 | orchestrator | 2025-08-29 14:56:42.103760 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 14:56:42.103772 | orchestrator | 2025-08-29 14:56:42.103784 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29 14:56:42.103798 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:01.048) 0:00:01.873 ********* 2025-08-29 14:56:42.103813 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:56:42.103828 | orchestrator | 2025-08-29 14:56:42.103841 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-08-29 14:56:42.103857 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:01.041) 0:00:02.914 ********* 2025-08-29 14:56:42.103873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
14:56:42.103924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.103940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.103973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.103988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104021 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104036 | orchestrator | 2025-08-29 14:56:42.104049 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-08-29 14:56:42.104063 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:01.096) 0:00:04.011 ********* 2025-08-29 14:56:42.104107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104215 | orchestrator | 2025-08-29 14:56:42.104228 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-08-29 14:56:42.104242 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:01.643) 0:00:05.654 ********* 2025-08-29 14:56:42.104254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104327 | orchestrator | 2025-08-29 14:56:42.104335 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-08-29 14:56:42.104343 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:01.347) 0:00:07.002 ********* 2025-08-29 14:56:42.104352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-08-29 14:56:42.104384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104450 | orchestrator | 2025-08-29 14:56:42.104463 | orchestrator | 
TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-08-29 14:56:42.104476 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:01.625) 0:00:08.628 ********* 2025-08-29 14:56:42.104489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.104652 | orchestrator | 2025-08-29 14:56:42.104665 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-08-29 14:56:42.104679 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:01.499) 0:00:10.127 ********* 2025-08-29 14:56:42.104692 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:56:42.104708 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:56:42.104721 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:56:42.104734 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:56:42.104748 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:56:42.104761 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:56:42.104774 | orchestrator | 2025-08-29 14:56:42.104789 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-08-29 14:56:42.104802 | 
orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:02.474) 0:00:12.602 ********* 2025-08-29 14:56:42.104816 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-08-29 14:56:42.104829 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-08-29 14:56:42.104842 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-08-29 14:56:42.104871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-08-29 14:56:42.104880 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-08-29 14:56:42.104893 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-08-29 14:56:42.104907 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104920 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104944 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 14:56:42.104983 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:56:42.104998 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-08-29 14:56:42.105012 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:56:42.105025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:56:42.105040 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:56:42.105053 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 14:56:42.105067 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105122 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 14:56:42.105153 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105168 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105176 | orchestrator 
| changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105198 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 14:56:42.105206 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105214 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105222 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105237 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105245 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 14:56:42.105261 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:56:42.105269 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:56:42.105277 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 14:56:42.105285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:56:42.105300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:56:42.105308 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 
'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 14:56:42.105316 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-08-29 14:56:42.105325 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-08-29 14:56:42.105333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-08-29 14:56:42.105341 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-08-29 14:56:42.105349 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-08-29 14:56:42.105357 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-08-29 14:56:42.105365 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:56:42.105373 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:56:42.105381 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 14:56:42.105389 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:56:42.105397 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:56:42.105405 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 14:56:42.105413 | orchestrator | 2025-08-29 14:56:42.105423 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105437 | orchestrator | Friday 29 August 2025 14:54:45 +0000 (0:00:18.899) 0:00:31.502 ********* 2025-08-29 14:56:42.105454 | orchestrator | 2025-08-29 14:56:42.105468 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105478 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.278) 0:00:31.781 ********* 2025-08-29 14:56:42.105489 | orchestrator | 2025-08-29 14:56:42.105499 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105509 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.069) 0:00:31.850 ********* 2025-08-29 14:56:42.105520 | orchestrator | 2025-08-29 14:56:42.105531 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105548 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.069) 0:00:31.919 ********* 2025-08-29 14:56:42.105559 | orchestrator | 2025-08-29 14:56:42.105569 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105579 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.068) 0:00:31.988 ********* 2025-08-29 14:56:42.105589 | orchestrator | 2025-08-29 14:56:42.105600 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 14:56:42.105611 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:00.064) 0:00:32.053 ********* 2025-08-29 14:56:42.105621 | orchestrator | 2025-08-29 14:56:42.105632 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-08-29 14:56:42.105642 | orchestrator | 
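The "Configure OVN in OVSDB" results above set (or remove) keys in the `external_ids` map of the local Open_vSwitch record: `ovn-encap-ip`, `ovn-encap-type=geneve`, `ovn-remote` pointing at the three southbound DB endpoints, and, on the controller nodes only, `ovn-bridge-mappings` and `ovn-cms-options`. As a rough sketch, the per-item results could be translated into `ovs-vsctl` invocations like this (hypothetical `render_ovs_commands` helper; kolla-ansible actually applies these through its own module, not by shelling out like this):

```python
def render_ovs_commands(items):
    """Translate {'name', 'value', 'state'} items into ovs-vsctl commands.

    'present' items set external_ids:<name>=<value> on the Open_vSwitch
    record; 'absent' items remove the key from the external_ids map.
    """
    commands = []
    for item in items:
        key = item["name"]
        if item.get("state", "present") == "absent":
            commands.append(
                f"ovs-vsctl remove Open_vSwitch . external_ids {key}")
        else:
            commands.append(
                f"ovs-vsctl set Open_vSwitch . external_ids:{key}={item['value']}")
    return commands
```

For example, the testbed-node-3 items above would render roughly as `ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=192.168.16.13` followed by the geneve, remote, and probe-interval settings.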
Friday 29 August 2025 14:54:46 +0000 (0:00:00.066) 0:00:32.120 ********* 2025-08-29 14:56:42.105655 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.105668 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:56:42.105675 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:56:42.105682 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.105689 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:56:42.105695 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.105702 | orchestrator | 2025-08-29 14:56:42.105709 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-08-29 14:56:42.105715 | orchestrator | Friday 29 August 2025 14:54:48 +0000 (0:00:01.750) 0:00:33.870 ********* 2025-08-29 14:56:42.105722 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:56:42.105729 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:56:42.105736 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:56:42.105742 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:56:42.105749 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:56:42.105756 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:56:42.105762 | orchestrator | 2025-08-29 14:56:42.105769 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-08-29 14:56:42.105776 | orchestrator | 2025-08-29 14:56:42.105782 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:56:42.105789 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:36.677) 0:01:10.548 ********* 2025-08-29 14:56:42.105796 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:56:42.105803 | orchestrator | 2025-08-29 14:56:42.105809 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:56:42.105816 | orchestrator | Friday 29 August 
2025 14:55:25 +0000 (0:00:00.867) 0:01:11.416 *********
2025-08-29 14:56:42.105823 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:56:42.105830 | orchestrator |
2025-08-29 14:56:42.105843 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-08-29 14:56:42.105850 | orchestrator | Friday 29 August 2025 14:55:26 +0000 (0:00:00.555) 0:01:11.972 *********
2025-08-29 14:56:42.105856 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.105863 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.105870 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.105876 | orchestrator |
2025-08-29 14:56:42.105883 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-08-29 14:56:42.105889 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:01.093) 0:01:13.065 *********
2025-08-29 14:56:42.105896 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.105903 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.105909 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.105916 | orchestrator |
2025-08-29 14:56:42.105922 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-08-29 14:56:42.105929 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:00.344) 0:01:13.410 *********
2025-08-29 14:56:42.105936 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.105942 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.105963 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.105969 | orchestrator |
2025-08-29 14:56:42.105976 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-08-29 14:56:42.105983 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.339) 0:01:13.787 *********
2025-08-29 14:56:42.105989 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.105996 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.106003 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.106009 | orchestrator |
2025-08-29 14:56:42.106091 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-08-29 14:56:42.106108 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.555) 0:01:14.126 *********
2025-08-29 14:56:42.106120 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.106131 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.106143 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.106152 | orchestrator |
2025-08-29 14:56:42.106162 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-08-29 14:56:42.106173 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.555) 0:01:14.682 *********
2025-08-29 14:56:42.106183 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106194 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106205 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106215 | orchestrator |
2025-08-29 14:56:42.106225 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-08-29 14:56:42.106234 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.320) 0:01:15.002 *********
2025-08-29 14:56:42.106244 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106254 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106264 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106274 | orchestrator |
2025-08-29 14:56:42.106283 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-08-29 14:56:42.106295 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.320) 0:01:15.323 *********
2025-08-29 14:56:42.106305 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106317 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106328 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106339 | orchestrator |
2025-08-29 14:56:42.106349 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-08-29 14:56:42.106360 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.294) 0:01:15.618 *********
2025-08-29 14:56:42.106372 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106382 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106393 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106405 | orchestrator |
2025-08-29 14:56:42.106416 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-08-29 14:56:42.106426 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.524) 0:01:16.143 *********
2025-08-29 14:56:42.106437 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106447 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106458 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106468 | orchestrator |
2025-08-29 14:56:42.106478 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-08-29 14:56:42.106488 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.327) 0:01:16.470 *********
2025-08-29 14:56:42.106500 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106510 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106528 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106539 | orchestrator |
2025-08-29 14:56:42.106548 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-08-29 14:56:42.106559 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.340) 0:01:16.811 *********
2025-08-29 14:56:42.106571 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106581 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106592 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106602 | orchestrator |
2025-08-29 14:56:42.106624 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-08-29 14:56:42.106636 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.315) 0:01:17.127 *********
2025-08-29 14:56:42.106646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106657 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106667 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106677 | orchestrator |
2025-08-29 14:56:42.106687 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-08-29 14:56:42.106699 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:00.772) 0:01:17.899 *********
2025-08-29 14:56:42.106710 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106721 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106731 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106742 | orchestrator |
2025-08-29 14:56:42.106753 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-08-29 14:56:42.106762 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:00.628) 0:01:18.528 *********
2025-08-29 14:56:42.106772 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106783 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106793 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106804 | orchestrator |
2025-08-29 14:56:42.106827 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-08-29 14:56:42.106838 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:00.624) 0:01:19.153 *********
2025-08-29 14:56:42.106848 | orchestrator | skipping: [testbed-node-0] 2025-08-29
14:56:42.106859 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106873 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106882 | orchestrator |
2025-08-29 14:56:42.106893 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-08-29 14:56:42.106904 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:00.372) 0:01:19.525 *********
2025-08-29 14:56:42.106914 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.106925 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.106935 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.106944 | orchestrator |
2025-08-29 14:56:42.106954 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:56:42.106965 | orchestrator | Friday 29 August 2025 14:55:34 +0000 (0:00:00.619) 0:01:20.145 *********
2025-08-29 14:56:42.106975 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:56:42.106987 | orchestrator |
2025-08-29 14:56:42.106998 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-08-29 14:56:42.107008 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:00.908) 0:01:21.053 *********
2025-08-29 14:56:42.107018 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.107030 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.107041 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.107051 | orchestrator |
2025-08-29 14:56:42.107062 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-08-29 14:56:42.107148 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.840) 0:01:21.894 *********
2025-08-29 14:56:42.107161 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.107170 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.107180 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.107190 | orchestrator |
2025-08-29 14:56:42.107202 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-08-29 14:56:42.107212 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.588) 0:01:22.482 *********
2025-08-29 14:56:42.107223 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107234 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107244 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107254 | orchestrator |
2025-08-29 14:56:42.107265 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-08-29 14:56:42.107290 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.299) 0:01:22.782 *********
2025-08-29 14:56:42.107302 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107311 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107322 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107332 | orchestrator |
2025-08-29 14:56:42.107343 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-08-29 14:56:42.107354 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.316) 0:01:23.099 *********
2025-08-29 14:56:42.107365 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107376 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107387 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107399 | orchestrator |
2025-08-29 14:56:42.107409 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-08-29 14:56:42.107420 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.389) 0:01:23.488 *********
2025-08-29 14:56:42.107430 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107440 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107451 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107462 | orchestrator |
2025-08-29 14:56:42.107473 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-08-29 14:56:42.107482 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.481) 0:01:23.970 *********
2025-08-29 14:56:42.107493 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107504 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107525 | orchestrator |
2025-08-29 14:56:42.107536 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-08-29 14:56:42.107547 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.299) 0:01:24.269 *********
2025-08-29 14:56:42.107568 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.107580 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.107591 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.107601 | orchestrator |
2025-08-29 14:56:42.107612 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 14:56:42.107640 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.287) 0:01:24.557 *********
2025-08-29 14:56:42.107656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107795 | orchestrator |
2025-08-29 14:56:42.107805 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-08-29 14:56:42.107816 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:01.586) 0:01:26.143 *********
2025-08-29 14:56:42.107835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.107958 | orchestrator |
2025-08-29 14:56:42.107969 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-08-29 14:56:42.107981 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:04.282) 0:01:30.425 *********
2025-08-29 14:56:42.107999 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108145 | orchestrator |
2025-08-29 14:56:42.108157 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:56:42.108168 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:02.110) 0:01:32.536 *********
2025-08-29 14:56:42.108178 | orchestrator |
2025-08-29 14:56:42.108188 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:56:42.108198 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.061) 0:01:32.597 *********
2025-08-29 14:56:42.108208 | orchestrator |
2025-08-29 14:56:42.108219 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-08-29 14:56:42.108230 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:00.066) 0:01:32.663 *********
2025-08-29 14:56:42.108241 | orchestrator |
2025-08-29 14:56:42.108251 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-08-29 14:56:42.108261 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.066) 0:01:32.729 *********
2025-08-29 14:56:42.108296 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:56:42.108308 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:56:42.108318 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:56:42.108328 | orchestrator |
2025-08-29 14:56:42.108338 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-08-29 14:56:42.108347 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:07.517) 0:01:40.247 *********
2025-08-29 14:56:42.108357 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:56:42.108368 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:56:42.108378 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:56:42.108388 | orchestrator |
2025-08-29 14:56:42.108406 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29
14:56:42.108417 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:03.114) 0:01:43.361 *********
2025-08-29 14:56:42.108428 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:56:42.108438 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:56:42.108447 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:56:42.108458 | orchestrator |
2025-08-29 14:56:42.108467 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-08-29 14:56:42.108477 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:02.530) 0:01:45.892 *********
2025-08-29 14:56:42.108488 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:56:42.108499 | orchestrator |
2025-08-29 14:56:42.108510 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-08-29 14:56:42.108520 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:00.119) 0:01:46.012 *********
2025-08-29 14:56:42.108530 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.108541 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.108551 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.108562 | orchestrator |
2025-08-29 14:56:42.108585 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-08-29 14:56:42.108597 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.834) 0:01:46.846 *********
2025-08-29 14:56:42.108607 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.108618 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.108629 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:56:42.108641 | orchestrator |
2025-08-29 14:56:42.108652 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-08-29 14:56:42.108664 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.625) 0:01:47.472 *********
2025-08-29 14:56:42.108670 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.108677 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.108683 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.108690 | orchestrator |
2025-08-29 14:56:42.108697 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-08-29 14:56:42.108703 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:01.101) 0:01:48.573 *********
2025-08-29 14:56:42.108710 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:56:42.108717 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:56:42.108723 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:56:42.108730 | orchestrator |
2025-08-29 14:56:42.108737 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-08-29 14:56:42.108743 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.681) 0:01:49.255 *********
2025-08-29 14:56:42.108750 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.108756 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.108763 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.108770 | orchestrator |
2025-08-29 14:56:42.108776 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-08-29 14:56:42.108783 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:00.796) 0:01:50.051 *********
2025-08-29 14:56:42.108790 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.108796 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.108803 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.108809 | orchestrator |
2025-08-29 14:56:42.108816 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-08-29 14:56:42.108822 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:00.914) 0:01:50.965 *********
2025-08-29 14:56:42.108829 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:56:42.108835 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:56:42.108842 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:56:42.108849 | orchestrator |
2025-08-29 14:56:42.108855 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 14:56:42.108862 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:00.614) 0:01:51.580 *********
2025-08-29 14:56:42.108869 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108887 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:56:42.108906 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108915 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108922 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108934 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108954 | orchestrator | 2025-08-29 14:56:42.108961 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 14:56:42.108968 | orchestrator | Friday 29 August 2025 14:56:07 +0000 (0:00:01.488) 0:01:53.069 ********* 2025-08-29 14:56:42.108980 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108987 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.108994 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109064 | orchestrator | 2025-08-29 14:56:42.109102 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:56:42.109110 | orchestrator | Friday 29 August 2025 14:56:11 +0000 (0:00:03.993) 0:01:57.063 ********* 2025-08-29 14:56:42.109122 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109131 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109143 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109208 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:56:42.109231 | orchestrator | 2025-08-29 14:56:42.109242 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:56:42.109262 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:02.993) 0:02:00.057 ********* 2025-08-29 14:56:42.109273 | orchestrator | 2025-08-29 14:56:42.109282 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:56:42.109288 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:00.066) 0:02:00.123 ********* 2025-08-29 14:56:42.109295 | orchestrator | 2025-08-29 14:56:42.109301 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:56:42.109308 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:00.293) 0:02:00.417 ********* 2025-08-29 14:56:42.109315 | orchestrator | 2025-08-29 14:56:42.109321 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:56:42.109328 | orchestrator | Friday 29 August 2025 14:56:14 +0000 
(0:00:00.067) 0:02:00.485 ********* 2025-08-29 14:56:42.109336 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:56:42.109347 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:56:42.109358 | orchestrator | 2025-08-29 14:56:42.109415 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:56:42.109431 | orchestrator | Friday 29 August 2025 14:56:21 +0000 (0:00:06.254) 0:02:06.739 ********* 2025-08-29 14:56:42.109442 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:56:42.109453 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:56:42.109460 | orchestrator | 2025-08-29 14:56:42.109467 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:56:42.109474 | orchestrator | Friday 29 August 2025 14:56:27 +0000 (0:00:06.224) 0:02:12.963 ********* 2025-08-29 14:56:42.109481 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:56:42.109487 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:56:42.109494 | orchestrator | 2025-08-29 14:56:42.109500 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 14:56:42.109507 | orchestrator | Friday 29 August 2025 14:56:33 +0000 (0:00:06.147) 0:02:19.111 ********* 2025-08-29 14:56:42.109514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:56:42.109521 | orchestrator | 2025-08-29 14:56:42.109527 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:56:42.109534 | orchestrator | Friday 29 August 2025 14:56:33 +0000 (0:00:00.130) 0:02:19.242 ********* 2025-08-29 14:56:42.109540 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.109547 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.109554 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.109560 | orchestrator | 2025-08-29 14:56:42.109567 | orchestrator | TASK [ovn-db : Configure OVN NB connection 
settings] *************************** 2025-08-29 14:56:42.109573 | orchestrator | Friday 29 August 2025 14:56:34 +0000 (0:00:00.770) 0:02:20.012 ********* 2025-08-29 14:56:42.109580 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:56:42.109586 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:56:42.109593 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:56:42.109600 | orchestrator | 2025-08-29 14:56:42.109606 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:56:42.109613 | orchestrator | Friday 29 August 2025 14:56:35 +0000 (0:00:00.763) 0:02:20.776 ********* 2025-08-29 14:56:42.109619 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.109626 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.109632 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.109639 | orchestrator | 2025-08-29 14:56:42.109649 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:56:42.109656 | orchestrator | Friday 29 August 2025 14:56:35 +0000 (0:00:00.832) 0:02:21.608 ********* 2025-08-29 14:56:42.109663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:56:42.109669 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:56:42.109676 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:56:42.109682 | orchestrator | 2025-08-29 14:56:42.109689 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:56:42.109695 | orchestrator | Friday 29 August 2025 14:56:36 +0000 (0:00:00.774) 0:02:22.383 ********* 2025-08-29 14:56:42.109708 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.109715 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.109721 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.109728 | orchestrator | 2025-08-29 14:56:42.109734 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 
2025-08-29 14:56:42.109741 | orchestrator | Friday 29 August 2025 14:56:37 +0000 (0:00:00.822) 0:02:23.206 ********* 2025-08-29 14:56:42.109748 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:56:42.109754 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:56:42.109761 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:56:42.109767 | orchestrator | 2025-08-29 14:56:42.109774 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:56:42.109781 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 14:56:42.109789 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 14:56:42.109803 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 14:56:42.109810 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:56:42.109818 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:56:42.109824 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:56:42.109831 | orchestrator | 2025-08-29 14:56:42.109838 | orchestrator | 2025-08-29 14:56:42.109844 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:56:42.109851 | orchestrator | Friday 29 August 2025 14:56:38 +0000 (0:00:01.198) 0:02:24.404 ********* 2025-08-29 14:56:42.109858 | orchestrator | =============================================================================== 2025-08-29 14:56:42.109865 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.68s 2025-08-29 14:56:42.109871 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.90s 2025-08-29 
14:56:42.109878 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.77s 2025-08-29 14:56:42.109885 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.34s 2025-08-29 14:56:42.109891 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.68s 2025-08-29 14:56:42.109898 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s 2025-08-29 14:56:42.109904 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.99s 2025-08-29 14:56:42.109911 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.99s 2025-08-29 14:56:42.109918 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.47s 2025-08-29 14:56:42.109929 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.11s 2025-08-29 14:56:42.109940 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.75s 2025-08-29 14:56:42.109951 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.64s 2025-08-29 14:56:42.109963 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s 2025-08-29 14:56:42.109974 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s 2025-08-29 14:56:42.109985 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.50s 2025-08-29 14:56:42.109994 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2025-08-29 14:56:42.110000 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.35s 2025-08-29 14:56:42.110013 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.20s 2025-08-29 14:56:42.110125 
| orchestrator | ovn-db : Get OVN_Southbound cluster leader ------------------------------ 1.10s 2025-08-29 14:56:42.110132 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.10s 2025-08-29 14:56:42.110140 | orchestrator | 2025-08-29 14:56:42 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:42.110147 | orchestrator | 2025-08-29 14:56:42 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:42.110154 | orchestrator | 2025-08-29 14:56:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:45.170369 | orchestrator | 2025-08-29 14:56:45 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:45.170760 | orchestrator | 2025-08-29 14:56:45 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:45.170859 | orchestrator | 2025-08-29 14:56:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:48.212171 | orchestrator | 2025-08-29 14:56:48 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:48.214277 | orchestrator | 2025-08-29 14:56:48 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:48.214339 | orchestrator | 2025-08-29 14:56:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:51.263519 | orchestrator | 2025-08-29 14:56:51 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:51.264373 | orchestrator | 2025-08-29 14:56:51 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:51.264401 | orchestrator | 2025-08-29 14:56:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:56:54.329688 | orchestrator | 2025-08-29 14:56:54 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:56:54.329785 | orchestrator | 2025-08-29 14:56:54 | INFO  | Task 
690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:56:54.329799 | orchestrator | 2025-08-29 14:56:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29
14:58:28.818898 | orchestrator | 2025-08-29 14:58:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:31.867623 | orchestrator | 2025-08-29 14:58:31 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:31.869399 | orchestrator | 2025-08-29 14:58:31 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:31.869431 | orchestrator | 2025-08-29 14:58:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:34.923680 | orchestrator | 2025-08-29 14:58:34 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:34.926090 | orchestrator | 2025-08-29 14:58:34 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:34.926134 | orchestrator | 2025-08-29 14:58:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:37.972280 | orchestrator | 2025-08-29 14:58:37 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:37.974723 | orchestrator | 2025-08-29 14:58:37 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:37.975236 | orchestrator | 2025-08-29 14:58:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:41.044411 | orchestrator | 2025-08-29 14:58:41 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:41.045642 | orchestrator | 2025-08-29 14:58:41 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:41.045835 | orchestrator | 2025-08-29 14:58:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:44.087512 | orchestrator | 2025-08-29 14:58:44 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:44.091866 | orchestrator | 2025-08-29 14:58:44 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:44.091923 | orchestrator | 2025-08-29 14:58:44 | INFO  | Wait 1 second(s) 
until the next check 2025-08-29 14:58:47.144711 | orchestrator | 2025-08-29 14:58:47 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:47.146537 | orchestrator | 2025-08-29 14:58:47 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:47.147105 | orchestrator | 2025-08-29 14:58:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:50.197732 | orchestrator | 2025-08-29 14:58:50 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:50.203862 | orchestrator | 2025-08-29 14:58:50 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:50.204024 | orchestrator | 2025-08-29 14:58:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:53.231687 | orchestrator | 2025-08-29 14:58:53 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:53.232096 | orchestrator | 2025-08-29 14:58:53 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:53.232193 | orchestrator | 2025-08-29 14:58:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:56.284546 | orchestrator | 2025-08-29 14:58:56 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:56.284618 | orchestrator | 2025-08-29 14:58:56 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:56.284624 | orchestrator | 2025-08-29 14:58:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:59.324781 | orchestrator | 2025-08-29 14:58:59 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:58:59.328147 | orchestrator | 2025-08-29 14:58:59 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:58:59.328207 | orchestrator | 2025-08-29 14:58:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:02.382699 | orchestrator | 2025-08-29 
14:59:02 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:59:02.385048 | orchestrator | 2025-08-29 14:59:02 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:02.385097 | orchestrator | 2025-08-29 14:59:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:05.430460 | orchestrator | 2025-08-29 14:59:05 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:59:05.432522 | orchestrator | 2025-08-29 14:59:05 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:05.432615 | orchestrator | 2025-08-29 14:59:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:08.473668 | orchestrator | 2025-08-29 14:59:08 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:59:08.475404 | orchestrator | 2025-08-29 14:59:08 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:08.475504 | orchestrator | 2025-08-29 14:59:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:11.526194 | orchestrator | 2025-08-29 14:59:11 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:59:11.527828 | orchestrator | 2025-08-29 14:59:11 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:11.528765 | orchestrator | 2025-08-29 14:59:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:14.573937 | orchestrator | 2025-08-29 14:59:14 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state STARTED 2025-08-29 14:59:14.575146 | orchestrator | 2025-08-29 14:59:14 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:14.575512 | orchestrator | 2025-08-29 14:59:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:17.628020 | orchestrator | 2025-08-29 14:59:17 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state 
STARTED
2025-08-29 14:59:17.628169 | orchestrator | 2025-08-29 14:59:17 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED
2025-08-29 14:59:17.628233 | orchestrator | 2025-08-29 14:59:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:59:20.693136 | orchestrator |
2025-08-29 14:59:20.693207 | orchestrator |
2025-08-29 14:59:20.693213 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:59:20.693218 | orchestrator |
2025-08-29 14:59:20.693222 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:59:20.693226 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.250) 0:00:00.250 *********
2025-08-29 14:59:20.693230 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.693236 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.693240 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.693243 | orchestrator |
2025-08-29 14:59:20.693247 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:59:20.693251 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.391) 0:00:00.642 *********
2025-08-29 14:59:20.693256 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 14:59:20.693260 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 14:59:20.693264 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 14:59:20.693269 | orchestrator |
2025-08-29 14:59:20.693272 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 14:59:20.693276 | orchestrator |
2025-08-29 14:59:20.693280 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 14:59:20.693284 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:01.006) 0:00:01.649 *********
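The deploy run above spends its first minutes polling two OSISM task IDs at a fixed interval until they leave the STARTED state. A minimal sketch of that wait-loop pattern, assuming a hypothetical `get_state` callable (the real client here is the osism tooling, which is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll get_state(task_id) until no task reports 'STARTED' or timeout expires.

    get_state is a hypothetical callable standing in for whatever client
    the orchestrator uses to query task state.
    """
    states = {}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        time.sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError(f"tasks still running after {timeout}s: {states}")
```

The fixed-interval sleep matches the "Wait 1 second(s)" messages in the log; a production loop might add jitter or backoff instead.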
2025-08-29 14:59:20.693288 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.693292 | orchestrator |
2025-08-29 14:59:20.693296 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 14:59:20.693300 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:00.789) 0:00:02.439 *********
2025-08-29 14:59:20.693303 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.693307 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.693311 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.693314 | orchestrator |
2025-08-29 14:59:20.693318 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 14:59:20.693340 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:01.138) 0:00:03.577 *********
2025-08-29 14:59:20.693344 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.693348 | orchestrator |
2025-08-29 14:59:20.693351 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 14:59:20.693355 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:00.920) 0:00:04.498 *********
2025-08-29 14:59:20.693359 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.693363 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.693366 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.693370 | orchestrator |
2025-08-29 14:59:20.693374 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 14:59:20.693377 | orchestrator | Friday 29 August 2025 14:52:56 +0000 (0:00:01.032) 0:00:05.530 *********
2025-08-29 14:59:20.693381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693396 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 14:59:20.693404 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:59:20.693409 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:59:20.693413 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 14:59:20.693416 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:59:20.693444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:59:20.693449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 14:59:20.693452 | orchestrator |
2025-08-29 14:59:20.693456 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 14:59:20.693460 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:02.532) 0:00:08.063 *********
2025-08-29 14:59:20.693464 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 14:59:20.693468 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 14:59:20.693472 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 14:59:20.693575 | orchestrator |
2025-08-29 14:59:20.693590 | orchestrator |
TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:59:20.693594 | orchestrator | Friday 29 August 2025 14:53:00 +0000 (0:00:01.145) 0:00:09.208 ********* 2025-08-29 14:59:20.693598 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-08-29 14:59:20.693602 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-08-29 14:59:20.693606 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-08-29 14:59:20.693631 | orchestrator | 2025-08-29 14:59:20.693635 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:59:20.693639 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:01.755) 0:00:10.963 ********* 2025-08-29 14:59:20.693643 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-08-29 14:59:20.693647 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.693662 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-08-29 14:59:20.693666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.693670 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-08-29 14:59:20.693679 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.693682 | orchestrator | 2025-08-29 14:59:20.693686 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-08-29 14:59:20.693690 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:00.875) 0:00:11.839 ********* 2025-08-29 14:59:20.693697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.693736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.693741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.693745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.693749 | orchestrator | 2025-08-29 14:59:20.693753 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-08-29 14:59:20.693757 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:03.645) 0:00:15.485 ********* 2025-08-29 14:59:20.693761 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.693764 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.693784 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.693789 | orchestrator | 2025-08-29 14:59:20.693793 | orchestrator | TASK [loadbalancer : Ensuring 
proxysql service config subdirectories exist] **** 2025-08-29 14:59:20.693796 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:01.962) 0:00:17.447 ********* 2025-08-29 14:59:20.693800 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-08-29 14:59:20.693804 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-08-29 14:59:20.693808 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-08-29 14:59:20.693812 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-08-29 14:59:20.693815 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-08-29 14:59:20.693819 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-08-29 14:59:20.693823 | orchestrator | 2025-08-29 14:59:20.693827 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-08-29 14:59:20.693831 | orchestrator | Friday 29 August 2025 14:53:10 +0000 (0:00:01.948) 0:00:19.402 ********* 2025-08-29 14:59:20.693834 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.693838 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.693844 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.693850 | orchestrator | 2025-08-29 14:59:20.693856 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-08-29 14:59:20.693862 | orchestrator | Friday 29 August 2025 14:53:11 +0000 (0:00:01.625) 0:00:21.028 ********* 2025-08-29 14:59:20.693868 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.693873 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.693879 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.693884 | orchestrator | 2025-08-29 14:59:20.693890 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-08-29 14:59:20.693954 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:01.282) 0:00:22.311 ********* 2025-08-29 14:59:20.693967 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.694011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.694120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694131 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.694135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.694139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.694148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694163 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.694167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.694171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.694175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694188 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.694192 | orchestrator | 2025-08-29 14:59:20.694195 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-08-29 14:59:20.694199 | orchestrator | Friday 29 August 2025 14:53:14 +0000 (0:00:00.983) 0:00:23.294 ********* 2025-08-29 14:59:20.694203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7', '__omit_place_holder__adc0ba8891b4d51fa4a3b2f4d0f88be1c3347ad7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 14:59:20.694286 | orchestrator | 2025-08-29 14:59:20.694289 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 14:59:20.694293 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:05.237) 0:00:28.532 ********* 2025-08-29 14:59:20.694297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.694340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.694366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.694370 | orchestrator | 2025-08-29 14:59:20.694374 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 14:59:20.694378 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:03.561) 0:00:32.093 ********* 2025-08-29 14:59:20.694420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:59:20 | INFO  | Task 86fa6f78-a8e4-4c0d-bc9a-243cf1c1cbb0 is in state SUCCESS 2025-08-29 14:59:20.694566 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:59:20.694624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 14:59:20.694630 | orchestrator | 2025-08-29 14:59:20.694635 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 14:59:20.694640 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:02.140) 0:00:34.234 ********* 2025-08-29 14:59:20.694644 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:59:20.694649 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:59:20.694653 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 14:59:20.694657 | orchestrator | 2025-08-29 14:59:20.694661 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 14:59:20.694665 | orchestrator | Friday 29 August 2025 14:53:31 +0000 (0:00:06.520) 0:00:40.755 ********* 2025-08-29 14:59:20.694669 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.694674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.694677 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.694681 | orchestrator | 2025-08-29 14:59:20.694685 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 14:59:20.694701 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:01.596) 0:00:42.351 ********* 2025-08-29 14:59:20.694706 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:59:20.694710 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:59:20.694714 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 14:59:20.694718 | orchestrator | 2025-08-29 14:59:20.694722 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 14:59:20.694726 | orchestrator | Friday 29 August 2025 14:53:36 +0000 (0:00:03.269) 0:00:45.621 ********* 2025-08-29 14:59:20.694730 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:59:20.694735 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:59:20.694738 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 14:59:20.694742 | orchestrator | 2025-08-29 14:59:20.694746 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-08-29 14:59:20.694750 | orchestrator | Friday 29 August 2025 14:53:40 +0000 (0:00:03.556) 0:00:49.178 ********* 2025-08-29 14:59:20.694754 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-08-29 14:59:20.694758 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 14:59:20.694762 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 14:59:20.694766 | orchestrator | 2025-08-29 14:59:20.694770 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 14:59:20.694773 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:02.731) 0:00:51.910 ********* 2025-08-29 14:59:20.694777 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 14:59:20.694781 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 14:59:20.694785 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 14:59:20.694788 | orchestrator | 2025-08-29 14:59:20.694792 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 14:59:20.694796 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:02.173) 0:00:54.084 ********* 2025-08-29 14:59:20.694800 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.694804 | orchestrator | 2025-08-29 14:59:20.694807 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA 
certificates] *** 2025-08-29 14:59:20.694811 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:00.700) 0:00:54.784 ********* 2025-08-29 14:59:20.694821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.694884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.694890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.694898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 
14:59:20.694941 | orchestrator | 2025-08-29 14:59:20.694946 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 14:59:20.694950 | orchestrator | Friday 29 August 2025 14:53:49 +0000 (0:00:03.753) 0:00:58.538 ********* 2025-08-29 14:59:20.694954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.694958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.694962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.694971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.694976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.694989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.694993 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.694997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695010 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695014 | orchestrator | 2025-08-29 14:59:20.695018 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 14:59:20.695054 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:00.883) 0:00:59.422 ********* 2025-08-29 14:59:20.695059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-08-29 14:59:20.695076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-08-29 14:59:20.695093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695097 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-08-29 14:59:20.695115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695119 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695123 | orchestrator | 2025-08-29 14:59:20.695127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 14:59:20.695132 | orchestrator | Friday 29 August 2025 14:53:51 +0000 (0:00:01.528) 0:01:00.951 ********* 2025-08-29 14:59:20.695139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695176 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695198 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695202 | orchestrator | 2025-08-29 14:59:20.695207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 14:59:20.695212 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:00.704) 0:01:01.655 ********* 2025-08-29 14:59:20.695217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695221 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695239 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695252 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695261 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695269 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695282 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695287 | orchestrator | 2025-08-29 14:59:20.695291 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 14:59:20.695295 | orchestrator | Friday 29 August 2025 14:53:53 +0000 (0:00:00.773) 0:01:02.428 ********* 2025-08-29 14:59:20.695303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695321 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695362 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695366 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695370 | orchestrator | 2025-08-29 14:59:20.695375 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 14:59:20.695379 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:01.136) 0:01:03.565 ********* 2025-08-29 14:59:20.695383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695400 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695453 | orchestrator | 2025-08-29 14:59:20.695457 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS certificate] *** 2025-08-29 14:59:20.695462 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:00.786) 0:01:04.351 ********* 2025-08-29 14:59:20.695468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695489 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695510 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695530 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695534 | orchestrator | 2025-08-29 14:59:20.695538 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 14:59:20.695544 | orchestrator | Friday 29 August 2025 14:53:56 +0000 (0:00:00.721) 0:01:05.073 ********* 2025-08-29 14:59:20.695549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695566 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695587 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 14:59:20.695598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 14:59:20.695608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 14:59:20.695615 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695621 | orchestrator | 2025-08-29 14:59:20.695627 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 14:59:20.695633 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:01.227) 0:01:06.301 ********* 2025-08-29 14:59:20.695637 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:59:20.695641 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:59:20.695645 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 14:59:20.695649 | orchestrator | 2025-08-29 14:59:20.695653 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 14:59:20.695657 | orchestrator | Friday 29 August 2025 14:53:58 +0000 (0:00:01.452) 0:01:07.753 ********* 2025-08-29 14:59:20.695661 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:59:20.695665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:59:20.695671 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 14:59:20.695678 | orchestrator | 2025-08-29 14:59:20.695684 | 
orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 14:59:20.695691 | orchestrator | Friday 29 August 2025 14:54:00 +0000 (0:00:01.631) 0:01:09.385 ********* 2025-08-29 14:59:20.695698 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:59:20.695705 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:59:20.695712 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 14:59:20.695719 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:59:20.695725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.695732 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:59:20.695739 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.695750 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 14:59:20.695757 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.695763 | orchestrator | 2025-08-29 14:59:20.695769 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 14:59:20.695776 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:01.093) 0:01:10.479 ********* 2025-08-29 14:59:20.695788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 14:59:20.695827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.695838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.695843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 14:59:20.695848 | orchestrator | 2025-08-29 14:59:20.695852 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 14:59:20.695856 | orchestrator | Friday 29 August 2025 14:54:04 +0000 (0:00:03.114) 0:01:13.594 ********* 2025-08-29 14:59:20.695860 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.695864 | orchestrator | 2025-08-29 14:59:20.695870 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 14:59:20.695877 | orchestrator | Friday 29 
August 2025 14:54:05 +0000 (0:00:00.730) 0:01:14.324 ********* 2025-08-29 14:59:20.695886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:59:20.695894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.695929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:59:20.698223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.698307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 14:59:20.698375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.698400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698417 | orchestrator | 2025-08-29 14:59:20.698426 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 14:59:20.698434 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:03.669) 0:01:17.994 ********* 2025-08-29 14:59:20.698444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:59:20.698452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.698460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698489 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.698505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:59:20.698514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.698521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698531 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.698536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 14:59:20.698549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.698557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698566 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.698571 | orchestrator | 2025-08-29 14:59:20.698576 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 14:59:20.698581 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:00.734) 0:01:18.728 ********* 2025-08-29 14:59:20.698586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.698603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698612 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.698617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 14:59:20.698634 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.698641 | orchestrator | 2025-08-29 14:59:20.698649 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 14:59:20.698655 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.886) 0:01:19.615 ********* 2025-08-29 14:59:20.698660 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.698665 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.698669 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.698674 | orchestrator | 2025-08-29 14:59:20.698678 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 14:59:20.698683 | orchestrator | Friday 29 
August 2025 14:54:12 +0000 (0:00:01.629) 0:01:21.244 ********* 2025-08-29 14:59:20.698688 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.698692 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.698697 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.698704 | orchestrator | 2025-08-29 14:59:20.698711 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 14:59:20.698718 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:01.852) 0:01:23.097 ********* 2025-08-29 14:59:20.698726 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.698733 | orchestrator | 2025-08-29 14:59:20.698745 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 14:59:20.698750 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.643) 0:01:23.741 ********* 2025-08-29 14:59:20.698760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.698767 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.698792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.698834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698855 | orchestrator | 2025-08-29 
14:59:20.698862 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 14:59:20.698870 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:03.651) 0:01:27.392 ********* 2025-08-29 14:59:20.698878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.698886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698900 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698930 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.698939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.698954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.698963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.698998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 
14:59:20.699013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.699022 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.699037 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699045 | orchestrator | 2025-08-29 14:59:20.699053 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 14:59:20.699061 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:00.865) 0:01:28.257 ********* 2025-08-29 14:59:20.699079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 
14:59:20.699088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:59:20.699096 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:59:20.699111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:59:20.699116 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:59:20.699125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 14:59:20.699130 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699134 | orchestrator | 2025-08-29 14:59:20.699139 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 14:59:20.699143 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.864) 0:01:29.122 ********* 2025-08-29 14:59:20.699148 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.699152 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.699157 | orchestrator | changed: [testbed-node-2] 
2025-08-29 14:59:20.699162 | orchestrator | 2025-08-29 14:59:20.699166 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 14:59:20.699171 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:01.466) 0:01:30.589 ********* 2025-08-29 14:59:20.699175 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.699180 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.699185 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.699189 | orchestrator | 2025-08-29 14:59:20.699194 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 14:59:20.699198 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:02.281) 0:01:32.870 ********* 2025-08-29 14:59:20.699203 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699208 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699212 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699217 | orchestrator | 2025-08-29 14:59:20.699221 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 14:59:20.699226 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.517) 0:01:33.388 ********* 2025-08-29 14:59:20.699230 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.699235 | orchestrator | 2025-08-29 14:59:20.699243 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 14:59:20.699248 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.695) 0:01:34.083 ********* 2025-08-29 14:59:20.699259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:59:20.699270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:59:20.699275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 14:59:20.699280 | orchestrator | 2025-08-29 14:59:20.699285 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 14:59:20.699289 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:02.547) 0:01:36.631 ********* 2025-08-29 14:59:20.699294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:59:20.699299 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:59:20.699314 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 14:59:20.699328 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699333 | orchestrator | 2025-08-29 14:59:20.699337 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 14:59:20.699342 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:03.065) 0:01:39.696 ********* 2025-08-29 14:59:20.699353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699365 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699380 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 14:59:20.699402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699407 | orchestrator | 2025-08-29 14:59:20.699412 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 14:59:20.699416 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:01.768) 0:01:41.465 ********* 2025-08-29 14:59:20.699421 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699425 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699430 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699434 | orchestrator | 2025-08-29 14:59:20.699439 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 14:59:20.699444 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.440) 0:01:41.905 ********* 2025-08-29 14:59:20.699448 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.699453 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.699457 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.699462 | orchestrator | 2025-08-29 14:59:20.699467 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 14:59:20.699475 | orchestrator | Friday 29 August 2025 14:54:34 +0000 (0:00:01.542) 0:01:43.448 ********* 2025-08-29 14:59:20.699479 | orchestrator | included: 
cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.699484 | orchestrator |
2025-08-29 14:59:20.699491 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-08-29 14:59:20.699498 | orchestrator | Friday 29 August 2025 14:54:35 +0000 (0:00:00.915) 0:01:44.364 *********
2025-08-29 14:59:20.699506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699602 | orchestrator |
2025-08-29 14:59:20.699609 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-08-29 14:59:20.699617 | orchestrator | Friday 29 August 2025 14:54:38 +0000 (0:00:03.217) 0:01:47.581 *********
2025-08-29 14:59:20.699625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699661 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.699669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699699 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.699708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.699713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.699734 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.699738 | orchestrator |
2025-08-29 14:59:20.699743 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-08-29 14:59:20.699747 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:00.581) 0:01:48.163 *********
2025-08-29 14:59:20.699752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699765 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.699770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699779 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.699787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 14:59:20.699797 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.699802 | orchestrator |
2025-08-29 14:59:20.699806 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-08-29 14:59:20.699811 | orchestrator | Friday 29 August 2025 14:54:40 +0000 (0:00:01.152) 0:01:49.315 *********
2025-08-29 14:59:20.699815 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.699820 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.699825 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.699829 | orchestrator |
2025-08-29 14:59:20.699834 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-08-29 14:59:20.699838 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:01.302) 0:01:50.618 *********
2025-08-29 14:59:20.699843 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.699847 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.699852 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.699857 | orchestrator |
2025-08-29 14:59:20.699861 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-08-29 14:59:20.699866 | orchestrator | Friday 29 August 2025 14:54:43 +0000 (0:00:01.857) 0:01:52.476 *********
2025-08-29 14:59:20.699870 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.699881 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.699886 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.699890 | orchestrator |
2025-08-29 14:59:20.699895 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-08-29 14:59:20.699899 | orchestrator | Friday 29 August 2025 14:54:43 +0000 (0:00:00.504) 0:01:52.755 *********
2025-08-29 14:59:20.699904 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.699927 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.699934 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.699942 | orchestrator |
2025-08-29 14:59:20.699949 | orchestrator | TASK [include_role : designate] ************************************************
2025-08-29 14:59:20.699956 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:00.784) 0:01:53.260 *********
2025-08-29 14:59:20.699964 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.699972 | orchestrator |
2025-08-29 14:59:20.699977 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-08-29 14:59:20.699982 | orchestrator | Friday 29 August 2025 14:54:44 +0000 (0:00:00.784) 0:01:54.044 *********
2025-08-29 14:59:20.699987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 14:59:20.699992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 14:59:20.700000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 14:59:20.700025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 14:59:20.700035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 14:59:20.700193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 14:59:20.700209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.700242 | orchestrator |
2025-08-29 14:59:20.700251 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-08-29 14:59:20.700262 | orchestrator | Friday 29 August
2025 14:54:48 +0000 (0:00:03.814) 0:01:57.859 ********* 2025-08-29 14:59:20.700283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:59:20.700296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:59:20.700301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700337 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.700358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:59:20.700366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:59:20.700374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 14:59:20.700411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 14:59:20.700434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700466 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.700480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.700511 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.700516 | orchestrator | 2025-08-29 14:59:20.700520 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-08-29 14:59:20.700525 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:01.146) 0:01:59.006 ********* 2025-08-29 14:59:20.700532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700556 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.700563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.700577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 14:59:20.700586 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.700592 | orchestrator | 2025-08-29 14:59:20.700600 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 14:59:20.700608 | orchestrator | Friday 29 August 2025 14:54:51 +0000 (0:00:01.184) 0:02:00.190 ********* 2025-08-29 14:59:20.700615 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.700622 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.700626 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.700631 | orchestrator | 2025-08-29 14:59:20.700635 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 14:59:20.700644 | orchestrator | Friday 29 August 2025 14:54:52 +0000 (0:00:01.262) 0:02:01.452 ********* 2025-08-29 14:59:20.700649 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.700654 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.700658 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.700663 | orchestrator | 2025-08-29 14:59:20.700667 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 14:59:20.700672 | orchestrator | Friday 29 August 2025 14:54:54 +0000 (0:00:02.194) 
0:02:03.647 ********* 2025-08-29 14:59:20.700677 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.700681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.700686 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.700690 | orchestrator | 2025-08-29 14:59:20.700695 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 14:59:20.700702 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:00.543) 0:02:04.190 ********* 2025-08-29 14:59:20.700707 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.700712 | orchestrator | 2025-08-29 14:59:20.700716 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 14:59:20.700721 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:01.040) 0:02:05.231 ********* 2025-08-29 14:59:20.700738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:59:20.700749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:59:20.700772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.700781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.700802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 14:59:20.700812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.700825 | orchestrator | 2025-08-29 14:59:20.700833 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 14:59:20.700841 | orchestrator | Friday 29 August 2025 14:55:00 +0000 (0:00:04.083) 0:02:09.315 ********* 2025-08-29 14:59:20.700863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:59:20.700870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:59:20.700878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.700902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.700966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.700975 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.700983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 14:59:20.701010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.701020 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701027 | orchestrator | 2025-08-29 14:59:20.701032 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 14:59:20.701040 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:02.651) 0:02:11.967 ********* 2025-08-29 
14:59:20.701048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701084 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701106 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 14:59:20.701140 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701145 | orchestrator | 2025-08-29 14:59:20.701149 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 14:59:20.701154 | orchestrator | Friday 29 August 2025 14:55:05 +0000 (0:00:02.900) 0:02:14.868 ********* 2025-08-29 14:59:20.701159 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.701167 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.701175 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 14:59:20.701182 | orchestrator | 2025-08-29 14:59:20.701190 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 14:59:20.701198 | orchestrator | Friday 29 August 2025 14:55:07 +0000 (0:00:01.358) 0:02:16.226 ********* 2025-08-29 14:59:20.701205 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.701213 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.701221 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.701233 | orchestrator | 2025-08-29 14:59:20.701238 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 14:59:20.701242 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:02.011) 0:02:18.238 ********* 2025-08-29 14:59:20.701247 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701256 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701260 | orchestrator | 2025-08-29 14:59:20.701265 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 14:59:20.701269 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:00.568) 0:02:18.806 ********* 2025-08-29 14:59:20.701274 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.701279 | orchestrator | 2025-08-29 14:59:20.701283 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 14:59:20.701288 | orchestrator | Friday 29 August 2025 14:55:10 +0000 (0:00:00.853) 0:02:19.659 ********* 2025-08-29 14:59:20.701293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:59:20.701298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:59:20.701308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 14:59:20.701313 | orchestrator | 2025-08-29 14:59:20.701317 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 14:59:20.701322 | orchestrator | Friday 29 August 2025 14:55:13 +0000 (0:00:03.207) 0:02:22.867 ********* 2025-08-29 14:59:20.701336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:59:20.701346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:59:20.701356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 14:59:20.701370 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701377 | orchestrator | 2025-08-29 14:59:20.701384 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 14:59:20.701393 | orchestrator | Friday 29 August 2025 14:55:14 +0000 (0:00:00.690) 0:02:23.557 ********* 2025-08-29 14:59:20.701401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701430 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
14:59:20.701438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 14:59:20.701456 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701463 | orchestrator | 2025-08-29 14:59:20.701471 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 14:59:20.701478 | orchestrator | Friday 29 August 2025 14:55:15 +0000 (0:00:00.696) 0:02:24.254 ********* 2025-08-29 14:59:20.701485 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.701490 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.701494 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.701499 | orchestrator | 2025-08-29 14:59:20.701504 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 14:59:20.701513 | orchestrator | Friday 29 August 2025 14:55:16 +0000 (0:00:01.375) 0:02:25.629 ********* 2025-08-29 14:59:20.701517 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.701522 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.701527 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.701531 | orchestrator | 2025-08-29 14:59:20.701546 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 14:59:20.701552 | orchestrator | Friday 29 August 2025 14:55:18 +0000 (0:00:02.155) 0:02:27.785 ********* 2025-08-29 14:59:20.701560 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701568 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701577 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 14:59:20.701582 | orchestrator | 2025-08-29 14:59:20.701586 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 14:59:20.701591 | orchestrator | Friday 29 August 2025 14:55:19 +0000 (0:00:00.538) 0:02:28.324 ********* 2025-08-29 14:59:20.701596 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.701603 | orchestrator | 2025-08-29 14:59:20.701611 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 14:59:20.701618 | orchestrator | Friday 29 August 2025 14:55:20 +0000 (0:00:00.941) 0:02:29.265 ********* 2025-08-29 14:59:20.701627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:59:20.701650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:59:20.701660 | orchestrator | 2025-08-29 14:59:20 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:20.701665 | orchestrator | 2025-08-29 14:59:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:20.701672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER':
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 14:59:20.701685 | orchestrator | 2025-08-29 14:59:20.701693 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 14:59:20.701703 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:03.599) 0:02:32.865 ********* 2025-08-29 14:59:20.701722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:59:20.701729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:59:20.701749 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 14:59:20.701769 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701777 | orchestrator | 2025-08-29 14:59:20.701784 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 14:59:20.701792 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:01.088) 0:02:33.953 ********* 2025-08-29 14:59:20.701800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:59:20.701870 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.701874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:59:20.701897 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.701941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 14:59:20.701964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-08-29 14:59:20.701974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 14:59:20.701979 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.701984 | orchestrator | 2025-08-29 14:59:20.701988 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-08-29 14:59:20.701993 | orchestrator | Friday 29 August 2025 14:55:26 +0000 (0:00:01.297) 0:02:35.250 ********* 2025-08-29 14:59:20.701998 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.702002 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.702007 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.702011 | orchestrator | 2025-08-29 14:59:20.702042 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-08-29 14:59:20.702046 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:01.571) 0:02:36.822 ********* 2025-08-29 14:59:20.702051 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.702055 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.702060 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.702064 | orchestrator | 2025-08-29 14:59:20.702069 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-08-29 14:59:20.702074 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:02.132) 0:02:38.955 ********* 2025-08-29 14:59:20.702078 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.702083 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.702094 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.702098 | orchestrator | 2025-08-29 14:59:20.702103 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-08-29 14:59:20.702107 | orchestrator | 
Friday 29 August 2025 14:55:30 +0000 (0:00:00.553) 0:02:39.508 ********* 2025-08-29 14:59:20.702112 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.702116 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.702121 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.702126 | orchestrator | 2025-08-29 14:59:20.702130 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-08-29 14:59:20.702135 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.392) 0:02:39.901 ********* 2025-08-29 14:59:20.702139 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.702144 | orchestrator | 2025-08-29 14:59:20.702148 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-08-29 14:59:20.702153 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:01.092) 0:02:40.993 ********* 2025-08-29 14:59:20.702169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:59:20.702175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:59:20.702190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:59:20.702198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:59:20.702219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 14:59:20.702235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:59:20.702251 | orchestrator | 2025-08-29 14:59:20.702260 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-08-29 14:59:20.702267 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:05.237) 0:02:46.230 ********* 2025-08-29 14:59:20.702278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:59:20.702299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:59:20.702323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 
14:59:20.702332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702340 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.702352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:59:20.702361 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.702377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 14:59:20.702383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 14:59:20.702393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 14:59:20.702400 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.702408 | orchestrator | 2025-08-29 14:59:20.702416 | orchestrator 
| TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-08-29 14:59:20.702423 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.660) 0:02:46.891 ********* 2025-08-29 14:59:20.702433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702446 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.702451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702460 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.702465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 14:59:20.702478 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.702483 | orchestrator | 2025-08-29 14:59:20.702487 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-08-29 14:59:20.702492 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.857) 0:02:47.749 ********* 2025-08-29 14:59:20.702496 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.702501 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.702506 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.702510 | orchestrator | 2025-08-29 14:59:20.702515 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-08-29 14:59:20.702519 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:01.756) 0:02:49.505 ********* 2025-08-29 14:59:20.702524 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.702537 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.702547 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.702551 | orchestrator | 2025-08-29 14:59:20.702556 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-08-29 14:59:20.702561 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:02.304) 0:02:51.809 ********* 2025-08-29 14:59:20.702567 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.702574 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.702582 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.702590 | orchestrator | 2025-08-29 14:59:20.702598 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-08-29 14:59:20.702606 | orchestrator | Friday 29 August 2025 14:55:43 +0000 (0:00:00.318) 0:02:52.128 ********* 
2025-08-29 14:59:20.702621 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.702626 | orchestrator |
2025-08-29 14:59:20.702631 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-08-29 14:59:20.702642 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:01.013) 0:02:53.141 *********
2025-08-29 14:59:20.702647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702695 | orchestrator |
2025-08-29 14:59:20.702700 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-08-29 14:59:20.702704 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:03.806) 0:02:56.947 *********
2025-08-29 14:59:20.702710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702735 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.702754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702765 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.702770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 14:59:20.702775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.702782 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.702789 | orchestrator |
2025-08-29 14:59:20.702797 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-08-29 14:59:20.702804 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.686) 0:02:57.633 *********
2025-08-29 14:59:20.702812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702835 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.702840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702849 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.702854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 14:59:20.702872 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.702877 | orchestrator |
2025-08-29 14:59:20.702881 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-08-29 14:59:20.702886 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.952) 0:02:58.586 *********
2025-08-29 14:59:20.702893 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.702900 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.702924 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.702932 | orchestrator |
2025-08-29 14:59:20.702939 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-08-29 14:59:20.702947 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:01.638) 0:03:00.225 *********
2025-08-29 14:59:20.702954 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.702961 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.702969 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.702976 | orchestrator |
2025-08-29 14:59:20.702982 | orchestrator | TASK [include_role : manila] ***************************************************
2025-08-29 14:59:20.702988 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:02.364) 0:03:02.590 *********
2025-08-29 14:59:20.702996 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.703008 | orchestrator |
2025-08-29 14:59:20.703015 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-08-29 14:59:20.703022 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:01.056) 0:03:03.647 *********
2025-08-29 14:59:20.703029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703169 | orchestrator |
2025-08-29 14:59:20.703173 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-08-29 14:59:20.703178 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:04.098) 0:03:07.745 *********
2025-08-29 14:59:20.703183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703220 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.703225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703249 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.703257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 14:59:20.703271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.703290 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.703295 | orchestrator |
2025-08-29 14:59:20.703299 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-08-29 14:59:20.703304 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.978) 0:03:08.724 *********
2025-08-29 14:59:20.703309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703319 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.703324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703333 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.703338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-08-29 14:59:20.703347 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.703352 | orchestrator |
2025-08-29 14:59:20.703357 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-08-29 14:59:20.703362 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:01.406) 0:03:09.704 *********
2025-08-29 14:59:20.703366 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.703371 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.703375 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.703380 | orchestrator |
2025-08-29 14:59:20.703387 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-08-29 14:59:20.703392 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:02.368) 0:03:11.111 *********
2025-08-29 14:59:20.703396 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.703401 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.703406 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.703410 | orchestrator |
2025-08-29 14:59:20.703415 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-08-29 14:59:20.703419 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:02.368) 0:03:13.479
********* 2025-08-29 14:59:20.703424 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.703428 | orchestrator | 2025-08-29 14:59:20.703433 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 14:59:20.703438 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:01.554) 0:03:15.034 ********* 2025-08-29 14:59:20.703442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 14:59:20.703447 | orchestrator | 2025-08-29 14:59:20.703460 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 14:59:20.703466 | orchestrator | Friday 29 August 2025 14:56:09 +0000 (0:00:03.338) 0:03:18.373 ********* 2025-08-29 14:59:20.703471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703491 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.703514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703547 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 14:59:20.703552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703566 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.703573 | orchestrator | 2025-08-29 14:59:20.703580 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 14:59:20.703587 | orchestrator | Friday 29 August 2025 14:56:11 +0000 (0:00:02.487) 0:03:20.860 ********* 2025-08-29 14:59:20.703608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.703644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703667 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.703673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 14:59:20.703678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 14:59:20.703685 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.703693 | orchestrator | 2025-08-29 14:59:20.703701 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 14:59:20.703708 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:02.465) 0:03:23.326 ********* 2025-08-29 14:59:20.703718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703740 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.703745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703755 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.703760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 14:59:20.703770 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.703774 | orchestrator | 2025-08-29 14:59:20.703779 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 14:59:20.703784 | orchestrator | Friday 29 August 2025 14:56:16 +0000 (0:00:02.575) 0:03:25.901 ********* 2025-08-29 14:59:20.703788 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.703793 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.703797 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.703802 | orchestrator | 2025-08-29 14:59:20.703806 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 14:59:20.703813 | orchestrator | Friday 29 August 2025 14:56:18 +0000 (0:00:02.095) 0:03:27.996 ********* 2025-08-29 14:59:20.703820 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.703832 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.703842 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.703848 | orchestrator | 2025-08-29 14:59:20.703858 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 14:59:20.703865 | orchestrator | Friday 29 August 2025 14:56:20 +0000 (0:00:01.522) 0:03:29.518 ********* 2025-08-29 14:59:20.703873 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.703880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.703887 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.703894 | orchestrator | 2025-08-29 14:59:20.703902 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 14:59:20.703949 | orchestrator | Friday 29 August 2025 14:56:21 +0000 (0:00:00.652) 0:03:30.171 ********* 2025-08-29 14:59:20.703975 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.703984 | orchestrator | 2025-08-29 14:59:20.703992 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 14:59:20.704001 | orchestrator | Friday 29 August 2025 14:56:22 +0000 (0:00:01.116) 0:03:31.288 ********* 2025-08-29 14:59:20.704024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2025-08-29 14:59:20.704031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:59:20.704036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 14:59:20.704041 | orchestrator | 2025-08-29 14:59:20.704045 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 14:59:20.704050 | orchestrator | Friday 29 August 2025 14:56:23 +0000 (0:00:01.490) 0:03:32.778 ********* 2025-08-29 14:59:20.704055 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:59:20.704068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:59:20.704074 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.704079 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.704093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 14:59:20.704098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.704103 | orchestrator | 2025-08-29 14:59:20.704107 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 14:59:20.704112 | orchestrator | Friday 29 August 2025 14:56:24 +0000 (0:00:00.707) 0:03:33.486 ********* 2025-08-29 14:59:20.704117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:59:20.704123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:59:20.704128 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.704132 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.704137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 14:59:20.704142 | orchestrator | skipping: 
[testbed-node-2]
2025-08-29 14:59:20.704146 | orchestrator |
2025-08-29 14:59:20.704151 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-08-29 14:59:20.704156 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:00.656) 0:03:34.143 *********
2025-08-29 14:59:20.704160 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.704165 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.704173 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.704177 | orchestrator |
2025-08-29 14:59:20.704182 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-08-29 14:59:20.704187 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:00.428) 0:03:34.571 *********
2025-08-29 14:59:20.704191 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.704196 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.704200 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.704205 | orchestrator |
2025-08-29 14:59:20.704209 | orchestrator | TASK [include_role : mistral] **************************************************
2025-08-29 14:59:20.704214 | orchestrator | Friday 29 August 2025 14:56:26 +0000 (0:00:01.367) 0:03:35.938 *********
2025-08-29 14:59:20.704219 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.704224 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.704228 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.704233 | orchestrator |
2025-08-29 14:59:20.704237 | orchestrator | TASK [include_role : neutron] **************************************************
2025-08-29 14:59:20.704242 | orchestrator | Friday 29 August 2025 14:56:27 +0000 (0:00:00.579) 0:03:36.518 *********
2025-08-29 14:59:20.704247 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.704252 | orchestrator |
2025-08-29 14:59:20.704256 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-08-29 14:59:20.704261 | orchestrator | Friday 29 August 2025 14:56:28 +0000 (0:00:01.221) 0:03:37.740 *********
2025-08-29 14:59:20.704268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 14:59:20.704283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:59:20.704311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 14:59:20.704321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 14:59:20.704340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:59:20.704433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 14:59:20.704487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:59:20.704563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:59:20.704675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 14:59:20.704700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 14:59:20.704724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 14:59:20.704733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.704740 | orchestrator |
2025-08-29 14:59:20.704747 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-08-29 14:59:20.704754 | orchestrator | Friday 29 August 2025 14:56:33 +0000
(0:00:04.552) 0:03:42.292 ********* 2025-08-29 14:59:20.704762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:59:20.704774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:59:20.704827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:59:20.704842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.704882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704890 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.704898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 
5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.704960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:59:20.704968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.704982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:59:20.704989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705043 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.705052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.705072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.705097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-08-29 14:59:20.705105 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.705114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705137 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 14:59:20.705174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.705192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.705218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705248 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.705256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 14:59:20.705264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.705324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705340 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 14:59:20.705348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 14:59:20.705388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 14:59:20.705397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705406 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.705414 | orchestrator | 2025-08-29 14:59:20.705421 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 14:59:20.705429 | orchestrator | Friday 29 August 2025 
14:56:35 +0000 (0:00:01.796) 0:03:44.088 ********* 2025-08-29 14:59:20.705438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705453 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.705461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705476 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.705484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 14:59:20.705504 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.705512 | orchestrator | 2025-08-29 14:59:20.705519 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 14:59:20.705527 | orchestrator | Friday 29 August 2025 14:56:36 +0000 (0:00:01.846) 0:03:45.935 ********* 2025-08-29 14:59:20.705535 | 
orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.705543 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.705550 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.705558 | orchestrator | 2025-08-29 14:59:20.705565 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 14:59:20.705573 | orchestrator | Friday 29 August 2025 14:56:38 +0000 (0:00:01.980) 0:03:47.915 ********* 2025-08-29 14:59:20.705580 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.705588 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.705595 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.705603 | orchestrator | 2025-08-29 14:59:20.705610 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 14:59:20.705618 | orchestrator | Friday 29 August 2025 14:56:40 +0000 (0:00:02.130) 0:03:50.046 ********* 2025-08-29 14:59:20.705625 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.705633 | orchestrator | 2025-08-29 14:59:20.705641 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-08-29 14:59:20.705649 | orchestrator | Friday 29 August 2025 14:56:42 +0000 (0:00:01.187) 0:03:51.233 ********* 2025-08-29 14:59:20.705674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.705683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.705692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.705706 | orchestrator | 2025-08-29 14:59:20.705711 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 14:59:20.705715 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:03.315) 0:03:54.548 ********* 2025-08-29 14:59:20.705720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.705725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.705734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.705739 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.705754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.705759 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.705764 | orchestrator | 2025-08-29 14:59:20.705769 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 14:59:20.705773 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:00.866) 0:03:55.415 ********* 2025-08-29 14:59:20.705778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705793 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.705797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705807 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.705812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 14:59:20.705821 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.705825 | orchestrator | 2025-08-29 14:59:20.705830 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 14:59:20.705835 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.762) 0:03:56.178 ********* 2025-08-29 14:59:20.705839 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.705844 
| orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.705848 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.705853 | orchestrator | 2025-08-29 14:59:20.705857 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 14:59:20.705862 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:01.419) 0:03:57.598 ********* 2025-08-29 14:59:20.705867 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.705872 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.705876 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.705881 | orchestrator | 2025-08-29 14:59:20.705886 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 14:59:20.705891 | orchestrator | Friday 29 August 2025 14:56:50 +0000 (0:00:02.147) 0:03:59.745 ********* 2025-08-29 14:59:20.705896 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.705900 | orchestrator | 2025-08-29 14:59:20.705922 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 14:59:20.705928 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:01.682) 0:04:01.428 ********* 2025-08-29 14:59:20.705943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.705953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.705972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.705996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.706001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.706006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.706011 | orchestrator | 2025-08-29 14:59:20.706063 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 14:59:20.706068 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:05.015) 0:04:06.444 ********* 2025-08-29 14:59:20.706087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.706097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706106 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.706120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.706133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706152 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.706161 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706165 | orchestrator |
2025-08-29 14:59:20.706170 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-08-29 14:59:20.706175 | orchestrator | Friday 29 August 2025 14:56:58 +0000 (0:00:00.632) 0:04:07.077 *********
2025-08-29 14:59:20.706180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706200 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706258 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 14:59:20.706267 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706272 | orchestrator |
2025-08-29 14:59:20.706276 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-08-29 14:59:20.706281 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:01.432) 0:04:08.509 *********
2025-08-29 14:59:20.706286 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.706290 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.706295 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.706299 | orchestrator |
2025-08-29 14:59:20.706304 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-08-29 14:59:20.706309 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:01.399) 0:04:09.909 *********
2025-08-29 14:59:20.706313 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.706318 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.706322 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.706327 | orchestrator |
2025-08-29 14:59:20.706332 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-08-29 14:59:20.706336 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:02.219) 0:04:12.128 *********
2025-08-29 14:59:20.706341 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.706345 | orchestrator |
2025-08-29 14:59:20.706350 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-08-29 14:59:20.706355 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:01.562) 0:04:13.690 *********
2025-08-29 14:59:20.706359 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-08-29 14:59:20.706364 | orchestrator |
2025-08-29 14:59:20.706369 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-08-29 14:59:20.706373 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:00.982) 0:04:14.672 *********
2025-08-29 14:59:20.706378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706410 | orchestrator |
2025-08-29 14:59:20.706415 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-08-29 14:59:20.706420 | orchestrator | Friday 29 August 2025 14:57:09 +0000 (0:00:04.267) 0:04:18.940 *********
2025-08-29 14:59:20.706435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706441 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706451 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706532 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706537 | orchestrator |
2025-08-29 14:59:20.706542 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-08-29 14:59:20.706547 | orchestrator | Friday 29 August 2025 14:57:11 +0000 (0:00:01.699) 0:04:20.640 *********
2025-08-29 14:59:20.706552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706566 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706580 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 14:59:20.706595 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706599 | orchestrator |
2025-08-29 14:59:20.706608 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 14:59:20.706612 | orchestrator | Friday 29 August 2025 14:57:13 +0000 (0:00:01.565) 0:04:22.206 *********
2025-08-29 14:59:20.706617 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.706621 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.706626 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.706630 | orchestrator |
2025-08-29 14:59:20.706635 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 14:59:20.706639 | orchestrator | Friday 29 August 2025 14:57:15 +0000 (0:00:02.608) 0:04:24.814 *********
2025-08-29 14:59:20.706644 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:59:20.706649 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:59:20.706654 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:59:20.706658 | orchestrator |
2025-08-29 14:59:20.706663 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-08-29 14:59:20.706667 | orchestrator | Friday 29 August 2025 14:57:18 +0000 (0:00:03.029) 0:04:27.843 *********
2025-08-29 14:59:20.706683 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-08-29 14:59:20.706688 | orchestrator |
2025-08-29 14:59:20.706693 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-08-29 14:59:20.706697 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:01.469) 0:04:29.312 *********
2025-08-29 14:59:20.706702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706707 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706721 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706731 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706735 | orchestrator |
2025-08-29 14:59:20.706740 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-08-29 14:59:20.706747 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:01.269) 0:04:30.581 *********
2025-08-29 14:59:20.706755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706763 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706778 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 14:59:20.706791 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706795 | orchestrator |
2025-08-29 14:59:20.706800 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-08-29 14:59:20.706806 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:01.362) 0:04:31.944 *********
2025-08-29 14:59:20.706813 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706833 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.706839 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.706843 | orchestrator |
2025-08-29 14:59:20.706848 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 14:59:20.706853 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:01.818) 0:04:33.762 *********
2025-08-29 14:59:20.706857 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.706862 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.706867 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.706872 | orchestrator |
2025-08-29 14:59:20.706876 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 14:59:20.706886 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:02.280) 0:04:36.043 *********
2025-08-29 14:59:20.706891 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.706895 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.706900 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.706904 | orchestrator |
2025-08-29 14:59:20.706955 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-08-29 14:59:20.706960 | orchestrator | Friday 29 August 2025 14:57:29 +0000 (0:00:02.910) 0:04:38.954 *********
2025-08-29 14:59:20.706965 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-08-29 14:59:20.706969 | orchestrator |
2025-08-29 14:59:20.706974 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-08-29 14:59:20.706978 | orchestrator | Friday 29 August 2025 14:57:30 +0000 (0:00:00.902) 0:04:39.856 *********
2025-08-29 14:59:20.706983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.706988 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.706993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.706997 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.707007 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707011 | orchestrator |
2025-08-29 14:59:20.707016 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-08-29 14:59:20.707021 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:01.422) 0:04:41.279 *********
2025-08-29 14:59:20.707029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.707033 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.707059 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.707064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 14:59:20.707069 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707073 | orchestrator |
2025-08-29 14:59:20.707078 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-08-29 14:59:20.707082 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:01.412) 0:04:42.691 *********
2025-08-29 14:59:20.707087 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.707092 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707096 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707101 | orchestrator |
2025-08-29 14:59:20.707105 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 14:59:20.707110 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:01.507) 0:04:44.198 *********
2025-08-29 14:59:20.707115 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.707119 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.707124 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.707128 | orchestrator |
2025-08-29 14:59:20.707133 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 14:59:20.707137 | orchestrator | Friday 29 August 2025 14:57:37 +0000 (0:00:02.659) 0:04:46.858 *********
2025-08-29 14:59:20.707142 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:59:20.707146 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:59:20.707151 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:59:20.707155 | orchestrator |
2025-08-29 14:59:20.707160 | orchestrator | TASK [include_role : octavia] **************************************************
2025-08-29 14:59:20.707164 | orchestrator | Friday 29 August 2025 14:57:40 +0000 (0:00:03.089) 0:04:49.947 *********
2025-08-29 14:59:20.707169 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.707174 | orchestrator |
2025-08-29 14:59:20.707178 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-08-29 14:59:20.707183 | orchestrator | Friday 29 August 2025 14:57:42 +0000 (0:00:01.662) 0:04:51.610 *********
2025-08-29 14:59:20.707188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.707193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 14:59:20.707205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 14:59:20.707221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 14:59:20.707226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 14:59:20.707231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.707237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 14:59:20.707249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 14:59:20.707254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 14:59:20.707268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 14:59:20.707273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 14:59:20.707278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.707299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 
'timeout': '30'}}})  2025-08-29 14:59:20.707304 | orchestrator | 2025-08-29 14:59:20.707309 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 14:59:20.707313 | orchestrator | Friday 29 August 2025 14:57:45 +0000 (0:00:03.423) 0:04:55.033 ********* 2025-08-29 14:59:20.707327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.707332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:59:20.707337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.707357 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.707365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.707378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:59:20.707383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.707402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.707407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.707415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 14:59:20.707430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 14:59:20.707440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 14:59:20.707445 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.707450 | orchestrator | 2025-08-29 14:59:20.707454 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 14:59:20.707459 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:00.859) 0:04:55.892 ********* 2025-08-29 14:59:20.707463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707477 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 14:59:20.707482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707491 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.707495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 14:59:20.707505 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.707510 | orchestrator | 2025-08-29 14:59:20.707514 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-08-29 14:59:20.707522 | orchestrator | Friday 29 August 2025 14:57:47 +0000 (0:00:01.074) 0:04:56.967 ********* 2025-08-29 14:59:20.707527 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.707531 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.707536 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.707540 | orchestrator | 2025-08-29 14:59:20.707545 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 14:59:20.707550 | orchestrator | Friday 29 August 2025 14:57:49 +0000 (0:00:01.353) 0:04:58.320 ********* 2025-08-29 14:59:20.707554 | orchestrator 
| changed: [testbed-node-0] 2025-08-29 14:59:20.707559 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.707564 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.707568 | orchestrator | 2025-08-29 14:59:20.707573 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 14:59:20.707578 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:01.879) 0:05:00.199 ********* 2025-08-29 14:59:20.707582 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.707587 | orchestrator | 2025-08-29 14:59:20.707600 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 14:59:20.707605 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:01.472) 0:05:01.672 ********* 2025-08-29 14:59:20.707611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:59:20.707616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:59:20.707625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 14:59:20.707633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:59:20.707647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:59:20.707654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 14:59:20.707664 | orchestrator | 2025-08-29 14:59:20.707668 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 14:59:20.707673 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:04.747) 0:05:06.419 ********* 2025-08-29 14:59:20.707678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 14:59:20.707686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 14:59:20.707691 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.707705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 14:59:20.707710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 14:59:20.707719 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 14:59:20.707732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 14:59:20.707738 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707742 | orchestrator |
2025-08-29 14:59:20.707747 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-08-29 14:59:20.707752 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:00.630) 0:05:07.050 *********
2025-08-29 14:59:20.707765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-08-29 14:59:20.707770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707784 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.707789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-08-29 14:59:20.707794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707804 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-08-29 14:59:20.707813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-08-29 14:59:20.707822 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707827 | orchestrator |
2025-08-29 14:59:20.707832 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-08-29 14:59:20.707836 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:01.642) 0:05:08.693 *********
2025-08-29 14:59:20.707841 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.707846 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707850 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707855 | orchestrator |
2025-08-29 14:59:20.707860 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-08-29 14:59:20.707864 | orchestrator | Friday 29 August 2025 14:58:00 +0000 (0:00:00.460) 0:05:09.153 *********
2025-08-29 14:59:20.707869 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:59:20.707873 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:59:20.707878 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:59:20.707882 | orchestrator |
2025-08-29 14:59:20.707887 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-08-29 14:59:20.707891 | orchestrator | Friday 29 August 2025 14:58:01 +0000 (0:00:01.376) 0:05:10.529 *********
2025-08-29 14:59:20.707896 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:59:20.707901 | orchestrator |
2025-08-29 14:59:20.707919 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-08-29 14:59:20.707927 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:01.706) 0:05:12.235 *********
2025-08-29 14:59:20.707938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 14:59:20.707967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 14:59:20.707976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.707984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.707992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 14:59:20.707997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 14:59:20.708011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 14:59:20.708033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 14:59:20.708038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 14:59:20.708085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-08-29 14:59:20.708090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 14:59:20.708125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-08-29 14:59:20.708130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 14:59:20.708135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-08-29 14:59:20.708153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708181 | orchestrator |
2025-08-29 14:59:20.708186 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-08-29 14:59:20.708190 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:04.366) 0:05:16.602 *********
2025-08-29 14:59:20.708195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 14:59:20.708200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 14:59:20.708211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 14:59:20.708229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 14:59:20.708239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 14:59:20.708250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-08-29 14:59:20.708264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 14:59:20.708286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-08-29 14:59:20.708302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:59:20.708312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 14:59:20.708316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 14:59:20.708324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:59:20.708338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 14:59:20.708343 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:59:20.708358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:59:20.708363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 14:59:20.708368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 14:59:20.708377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 14:59:20.708386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:59:20.708394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:59:20.708407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 14:59:20.708413 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708418 | orchestrator | 2025-08-29 14:59:20.708423 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 14:59:20.708427 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:00.854) 0:05:17.457 ********* 2025-08-29 14:59:20.708432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-08-29 14:59:20.708447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:59:20.708465 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:59:20.708475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:59:20.708480 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708489 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 14:59:20.708497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:59:20.708501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 14:59:20.708507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708515 | orchestrator | 2025-08-29 14:59:20.708524 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 14:59:20.708529 | orchestrator | Friday 29 August 2025 14:58:09 +0000 (0:00:01.353) 0:05:18.810 ********* 2025-08-29 14:59:20.708534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708538 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708546 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708551 | orchestrator | 2025-08-29 14:59:20.708555 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 14:59:20.708560 | orchestrator | Friday 29 August 2025 14:58:10 +0000 (0:00:00.481) 0:05:19.291 ********* 2025-08-29 14:59:20.708564 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708569 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708574 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708578 | orchestrator 
| 2025-08-29 14:59:20.708583 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 14:59:20.708587 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:01.375) 0:05:20.667 ********* 2025-08-29 14:59:20.708592 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.708596 | orchestrator | 2025-08-29 14:59:20.708601 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 14:59:20.708606 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:01.731) 0:05:22.398 ********* 2025-08-29 14:59:20.708614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:59:20.708620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:59:20.708628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:59:20.708634 | orchestrator | 2025-08-29 14:59:20.708638 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 14:59:20.708643 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:02.232) 
0:05:24.631 ********* 2025-08-29 14:59:20.708651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:59:20.708660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:59:20.708670 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 14:59:20.708679 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708684 | orchestrator | 2025-08-29 14:59:20.708688 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 14:59:20.708693 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:00.413) 0:05:25.044 ********* 2025-08-29 14:59:20.708697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:59:20.708702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}})  2025-08-29 14:59:20.708710 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 14:59:20.708724 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708728 | orchestrator | 2025-08-29 14:59:20.708733 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 14:59:20.708738 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.705) 0:05:25.750 ********* 2025-08-29 14:59:20.708742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708747 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708756 | orchestrator | 2025-08-29 14:59:20.708761 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 14:59:20.708769 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.845) 0:05:26.596 ********* 2025-08-29 14:59:20.708774 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708782 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708791 | orchestrator | 2025-08-29 14:59:20.708796 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 14:59:20.708800 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:01.345) 0:05:27.942 ********* 2025-08-29 14:59:20.708805 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:59:20.708809 | orchestrator | 2025-08-29 14:59:20.708814 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 
2025-08-29 14:59:20.708818 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:01.481) 0:05:29.423 ********* 2025-08-29 14:59:20.708823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 14:59:20.708863 | 
orchestrator | 2025-08-29 14:59:20.708868 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 14:59:20.708872 | orchestrator | Friday 29 August 2025 14:58:27 +0000 (0:00:06.786) 0:05:36.210 ********* 2025-08-29 14:59:20.708877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.708926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708937 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.708942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 14:59:20.708960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.708964 | orchestrator | 2025-08-29 14:59:20.708969 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 14:59:20.708974 | orchestrator | Friday 29 August 2025 14:58:27 +0000 (0:00:00.665) 0:05:36.875 ********* 2025-08-29 14:59:20.708982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.708987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.708991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.708996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709001 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709010 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709024 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 14:59:20.709064 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709069 | orchestrator | 
2025-08-29 14:59:20.709074 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 14:59:20.709078 | orchestrator | Friday 29 August 2025 14:58:28 +0000 (0:00:00.964) 0:05:37.840 ********* 2025-08-29 14:59:20.709083 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.709087 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.709092 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.709096 | orchestrator | 2025-08-29 14:59:20.709101 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 14:59:20.709105 | orchestrator | Friday 29 August 2025 14:58:30 +0000 (0:00:02.166) 0:05:40.006 ********* 2025-08-29 14:59:20.709110 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.709114 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.709119 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.709123 | orchestrator | 2025-08-29 14:59:20.709131 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 14:59:20.709136 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:02.204) 0:05:42.211 ********* 2025-08-29 14:59:20.709140 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709145 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709149 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709154 | orchestrator | 2025-08-29 14:59:20.709158 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 14:59:20.709163 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.362) 0:05:42.573 ********* 2025-08-29 14:59:20.709167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709172 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709176 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709181 | orchestrator | 
2025-08-29 14:59:20.709185 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 14:59:20.709190 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.315) 0:05:42.889 ********* 2025-08-29 14:59:20.709194 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709199 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709206 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709211 | orchestrator | 2025-08-29 14:59:20.709216 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 14:59:20.709220 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.339) 0:05:43.228 ********* 2025-08-29 14:59:20.709225 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709229 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709234 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709238 | orchestrator | 2025-08-29 14:59:20.709243 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 14:59:20.709247 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.751) 0:05:43.979 ********* 2025-08-29 14:59:20.709252 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709256 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709261 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709265 | orchestrator | 2025-08-29 14:59:20.709270 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 14:59:20.709274 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.359) 0:05:44.339 ********* 2025-08-29 14:59:20.709279 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709283 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709288 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709292 | orchestrator | 
2025-08-29 14:59:20.709297 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 14:59:20.709301 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.555) 0:05:44.894 ********* 2025-08-29 14:59:20.709306 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709325 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709329 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709334 | orchestrator | 2025-08-29 14:59:20.709338 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 14:59:20.709343 | orchestrator | Friday 29 August 2025 14:58:36 +0000 (0:00:01.014) 0:05:45.909 ********* 2025-08-29 14:59:20.709348 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709352 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709357 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709361 | orchestrator | 2025-08-29 14:59:20.709366 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 14:59:20.709370 | orchestrator | Friday 29 August 2025 14:58:37 +0000 (0:00:00.370) 0:05:46.280 ********* 2025-08-29 14:59:20.709375 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709379 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709384 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709388 | orchestrator | 2025-08-29 14:59:20.709394 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-08-29 14:59:20.709398 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.836) 0:05:47.116 ********* 2025-08-29 14:59:20.709403 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709407 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709412 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709416 | orchestrator | 2025-08-29 14:59:20.709421 | orchestrator | RUNNING HANDLER [loadbalancer : 
Stop backup proxysql container] **************** 2025-08-29 14:59:20.709426 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.826) 0:05:47.943 ********* 2025-08-29 14:59:20.709430 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709435 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709439 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709444 | orchestrator | 2025-08-29 14:59:20.709448 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 14:59:20.709453 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:01.184) 0:05:49.127 ********* 2025-08-29 14:59:20.709457 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.709462 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.709467 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.709471 | orchestrator | 2025-08-29 14:59:20.709476 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 14:59:20.709480 | orchestrator | Friday 29 August 2025 14:58:50 +0000 (0:00:10.270) 0:05:59.397 ********* 2025-08-29 14:59:20.709485 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709489 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709494 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709498 | orchestrator | 2025-08-29 14:59:20.709503 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-08-29 14:59:20.709508 | orchestrator | Friday 29 August 2025 14:58:51 +0000 (0:00:00.816) 0:06:00.213 ********* 2025-08-29 14:59:20.709512 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.709517 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.709521 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.709526 | orchestrator | 2025-08-29 14:59:20.709530 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] 
************* 2025-08-29 14:59:20.709535 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:12.810) 0:06:13.024 ********* 2025-08-29 14:59:20.709539 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709544 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709549 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709553 | orchestrator | 2025-08-29 14:59:20.709558 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-08-29 14:59:20.709565 | orchestrator | Friday 29 August 2025 14:59:04 +0000 (0:00:00.812) 0:06:13.836 ********* 2025-08-29 14:59:20.709570 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:59:20.709574 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:59:20.709579 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:59:20.709583 | orchestrator | 2025-08-29 14:59:20.709591 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-08-29 14:59:20.709595 | orchestrator | Friday 29 August 2025 14:59:14 +0000 (0:00:09.898) 0:06:23.735 ********* 2025-08-29 14:59:20.709600 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709604 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709609 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709614 | orchestrator | 2025-08-29 14:59:20.709618 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-08-29 14:59:20.709623 | orchestrator | Friday 29 August 2025 14:59:15 +0000 (0:00:00.372) 0:06:24.108 ********* 2025-08-29 14:59:20.709627 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709632 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709636 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709641 | orchestrator | 2025-08-29 14:59:20.709649 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-08-29 
14:59:20.709653 | orchestrator | Friday 29 August 2025 14:59:15 +0000 (0:00:00.376) 0:06:24.484 ********* 2025-08-29 14:59:20.709658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709667 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709672 | orchestrator | 2025-08-29 14:59:20.709676 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-08-29 14:59:20.709681 | orchestrator | Friday 29 August 2025 14:59:15 +0000 (0:00:00.359) 0:06:24.844 ********* 2025-08-29 14:59:20.709686 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709690 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709695 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709699 | orchestrator | 2025-08-29 14:59:20.709704 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-08-29 14:59:20.709708 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:00.756) 0:06:25.601 ********* 2025-08-29 14:59:20.709713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709717 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709722 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709727 | orchestrator | 2025-08-29 14:59:20.709731 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-08-29 14:59:20.709736 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:00.360) 0:06:25.961 ********* 2025-08-29 14:59:20.709740 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:59:20.709745 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:59:20.709749 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:59:20.709754 | orchestrator | 2025-08-29 14:59:20.709759 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-08-29 
14:59:20.709763 | orchestrator | Friday 29 August 2025 14:59:17 +0000 (0:00:00.357) 0:06:26.319 ********* 2025-08-29 14:59:20.709768 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709772 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709777 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709781 | orchestrator | 2025-08-29 14:59:20.709786 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-08-29 14:59:20.709790 | orchestrator | Friday 29 August 2025 14:59:18 +0000 (0:00:01.302) 0:06:27.621 ********* 2025-08-29 14:59:20.709795 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:59:20.709799 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:59:20.709804 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:59:20.709809 | orchestrator | 2025-08-29 14:59:20.709813 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:59:20.709818 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:59:20.709823 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:59:20.709827 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 14:59:20.709836 | orchestrator | 2025-08-29 14:59:20.709841 | orchestrator | 2025-08-29 14:59:20.709845 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:59:20.709850 | orchestrator | Friday 29 August 2025 14:59:19 +0000 (0:00:01.269) 0:06:28.891 ********* 2025-08-29 14:59:20.709855 | orchestrator | =============================================================================== 2025-08-29 14:59:20.709859 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.81s 2025-08-29 14:59:20.709864 | orchestrator | loadbalancer : 
Start backup haproxy container -------------------------- 10.27s 2025-08-29 14:59:20.709868 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.90s 2025-08-29 14:59:20.709873 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.79s 2025-08-29 14:59:20.709877 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.52s 2025-08-29 14:59:20.709882 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.24s 2025-08-29 14:59:20.709886 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.24s 2025-08-29 14:59:20.709891 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.02s 2025-08-29 14:59:20.709896 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.75s 2025-08-29 14:59:20.709900 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.55s 2025-08-29 14:59:20.709905 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.37s 2025-08-29 14:59:20.709928 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.27s 2025-08-29 14:59:20.709932 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.10s 2025-08-29 14:59:20.709937 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.08s 2025-08-29 14:59:20.709942 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.81s 2025-08-29 14:59:20.709946 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.81s 2025-08-29 14:59:20.709951 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.75s 2025-08-29 14:59:20.709955 | orchestrator | haproxy-config : Copying over 
aodh haproxy config ----------------------- 3.67s 2025-08-29 14:59:20.709960 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.65s 2025-08-29 14:59:20.709965 | orchestrator | loadbalancer : Ensuring config directories exist ------------------------ 3.65s 2025-08-29 14:59:23.755507 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 14:59:23.755611 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:23.755620 | orchestrator | 2025-08-29 14:59:23 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 14:59:23.755629 | orchestrator | 2025-08-29 14:59:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:26.806736 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 14:59:26.807671 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:26.809411 | orchestrator | 2025-08-29 14:59:26 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 14:59:26.809627 | orchestrator | 2025-08-29 14:59:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:29.859584 | orchestrator | 2025-08-29 14:59:29 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 14:59:29.861836 | orchestrator | 2025-08-29 14:59:29 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 14:59:29.864689 | orchestrator | 2025-08-29 14:59:29 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 14:59:29.865918 | orchestrator | 2025-08-29 14:59:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:59:32.902257 | orchestrator | 2025-08-29 14:59:32 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 
2025-08-29 15:00:39.942527 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:39.945023 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:39.946888 | orchestrator | 2025-08-29 15:00:39 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:39.947328 | orchestrator | 2025-08-29 15:00:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:43.009760 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state
STARTED 2025-08-29 15:00:43.011964 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:43.014238 | orchestrator | 2025-08-29 15:00:43 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:43.014386 | orchestrator | 2025-08-29 15:00:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:46.055561 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:46.056624 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:46.057397 | orchestrator | 2025-08-29 15:00:46 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:46.057439 | orchestrator | 2025-08-29 15:00:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:49.104503 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:49.106237 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:49.108009 | orchestrator | 2025-08-29 15:00:49 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:49.108454 | orchestrator | 2025-08-29 15:00:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:52.158573 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:52.159079 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:52.160431 | orchestrator | 2025-08-29 15:00:52 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:52.160520 | orchestrator | 2025-08-29 15:00:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:55.210374 | orchestrator | 
2025-08-29 15:00:55 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:55.210657 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:55.211533 | orchestrator | 2025-08-29 15:00:55 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:55.211638 | orchestrator | 2025-08-29 15:00:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:58.269613 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:00:58.270227 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:00:58.271945 | orchestrator | 2025-08-29 15:00:58 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:00:58.272144 | orchestrator | 2025-08-29 15:00:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:01.306278 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:01.307704 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:01.309654 | orchestrator | 2025-08-29 15:01:01 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:01.309700 | orchestrator | 2025-08-29 15:01:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:04.355897 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:04.357001 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:04.360045 | orchestrator | 2025-08-29 15:01:04 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:04.360094 | orchestrator | 2025-08-29 15:01:04 | INFO  | 
Wait 1 second(s) until the next check 2025-08-29 15:01:07.405751 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:07.406705 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:07.407965 | orchestrator | 2025-08-29 15:01:07 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:07.407999 | orchestrator | 2025-08-29 15:01:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:10.461597 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:10.462719 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:10.465386 | orchestrator | 2025-08-29 15:01:10 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:10.465453 | orchestrator | 2025-08-29 15:01:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:13.515045 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:13.516158 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:13.517841 | orchestrator | 2025-08-29 15:01:13 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:13.517919 | orchestrator | 2025-08-29 15:01:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:16.565605 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:16.566673 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:16.568384 | orchestrator | 2025-08-29 15:01:16 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state 
STARTED 2025-08-29 15:01:16.568421 | orchestrator | 2025-08-29 15:01:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:19.613884 | orchestrator | 2025-08-29 15:01:19 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:19.615928 | orchestrator | 2025-08-29 15:01:19 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:19.618059 | orchestrator | 2025-08-29 15:01:19 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:19.618119 | orchestrator | 2025-08-29 15:01:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:22.665989 | orchestrator | 2025-08-29 15:01:22 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:22.668569 | orchestrator | 2025-08-29 15:01:22 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:22.671409 | orchestrator | 2025-08-29 15:01:22 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:22.671481 | orchestrator | 2025-08-29 15:01:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:25.722419 | orchestrator | 2025-08-29 15:01:25 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:25.724305 | orchestrator | 2025-08-29 15:01:25 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:25.726212 | orchestrator | 2025-08-29 15:01:25 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:25.726292 | orchestrator | 2025-08-29 15:01:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:28.772064 | orchestrator | 2025-08-29 15:01:28 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:28.773571 | orchestrator | 2025-08-29 15:01:28 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state STARTED 2025-08-29 15:01:28.775332 | orchestrator | 
2025-08-29 15:01:28 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:28.775388 | orchestrator | 2025-08-29 15:01:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:31.813428 | orchestrator | 2025-08-29 15:01:31 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:31.820710 | orchestrator | 2025-08-29 15:01:31 | INFO  | Task 690c9b85-b454-45eb-bbe1-417344288ec7 is in state SUCCESS
2025-08-29 15:01:31.822468 | orchestrator |
2025-08-29 15:01:31.822508 | orchestrator |
2025-08-29 15:01:31.822514 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-08-29 15:01:31.822520 | orchestrator |
2025-08-29 15:01:31.822525 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 15:01:31.822531 | orchestrator | Friday 29 August 2025 14:49:50 +0000 (0:00:00.811) 0:00:00.811 *********
2025-08-29 15:01:31.822537 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.822542 | orchestrator |
2025-08-29 15:01:31.822558 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 15:01:31.822563 | orchestrator | Friday 29 August 2025 14:49:51 +0000 (0:00:01.241) 0:00:02.053 *********
2025-08-29 15:01:31.822568 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822574 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822578 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822583 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822587 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822592 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822596 | orchestrator |
2025-08-29 15:01:31.822601 | orchestrator | TASK [ceph-facts : Set_fact is_atomic]
*****************************************
2025-08-29 15:01:31.822605 | orchestrator | Friday 29 August 2025 14:49:53 +0000 (0:00:01.862) 0:00:03.916 *********
2025-08-29 15:01:31.822610 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822614 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822619 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822640 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822645 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822650 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822655 | orchestrator |
2025-08-29 15:01:31.822660 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 15:01:31.822664 | orchestrator | Friday 29 August 2025 14:49:53 +0000 (0:00:00.638) 0:00:04.555 *********
2025-08-29 15:01:31.822669 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822673 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822678 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822682 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822687 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822691 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822696 | orchestrator |
2025-08-29 15:01:31.822700 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 15:01:31.822705 | orchestrator | Friday 29 August 2025 14:49:54 +0000 (0:00:00.842) 0:00:05.398 *********
2025-08-29 15:01:31.822710 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822714 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822719 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822723 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822727 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822732 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822736 | orchestrator |
2025-08-29 15:01:31.822741 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 15:01:31.822746 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:00.673) 0:00:06.071 *********
2025-08-29 15:01:31.822750 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822792 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822797 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822801 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822806 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822810 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822815 | orchestrator |
2025-08-29 15:01:31.822819 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 15:01:31.822824 | orchestrator | Friday 29 August 2025 14:49:55 +0000 (0:00:00.621) 0:00:06.693 *********
2025-08-29 15:01:31.822828 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822833 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822837 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822842 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822846 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822851 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822855 | orchestrator |
2025-08-29 15:01:31.822860 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 15:01:31.822865 | orchestrator | Friday 29 August 2025 14:49:57 +0000 (0:00:01.037) 0:00:07.730 *********
2025-08-29 15:01:31.822869 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.822875 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.822879 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.822884 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.822888 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.822893 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.822897 | orchestrator |
2025-08-29 15:01:31.822902 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 15:01:31.822906 | orchestrator | Friday 29 August 2025 14:49:57 +0000 (0:00:00.854) 0:00:08.585 *********
2025-08-29 15:01:31.822911 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822915 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822920 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822925 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.822929 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.822933 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.822938 | orchestrator |
2025-08-29 15:01:31.822943 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 15:01:31.822952 | orchestrator | Friday 29 August 2025 14:49:59 +0000 (0:00:01.288) 0:00:09.873 *********
2025-08-29 15:01:31.822956 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.822961 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:01:31.822966 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:01:31.822970 | orchestrator |
2025-08-29 15:01:31.822975 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 15:01:31.822979 | orchestrator | Friday 29 August 2025 14:49:59 +0000 (0:00:00.771) 0:00:10.644 *********
2025-08-29 15:01:31.822984 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.822988 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.822993 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.822997 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.823002 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.823006 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.823011 | orchestrator |
2025-08-29 15:01:31.823024 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 15:01:31.823029 | orchestrator | Friday 29 August 2025 14:50:01 +0000 (0:00:01.122) 0:00:11.767 *********
2025-08-29 15:01:31.823033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.823038 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:01:31.823042 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:01:31.823047 | orchestrator |
2025-08-29 15:01:31.823051 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 15:01:31.823091 | orchestrator | Friday 29 August 2025 14:50:04 +0000 (0:00:03.044) 0:00:14.812 *********
2025-08-29 15:01:31.823097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.823102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:01:31.823107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:01:31.823112 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823117 | orchestrator |
2025-08-29 15:01:31.823123 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 15:01:31.823128 | orchestrator | Friday 29 August 2025 14:50:05 +0000 (0:00:01.347) 0:00:16.160 *********
2025-08-29 15:01:31.823134 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823153 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823158 | orchestrator |
2025-08-29 15:01:31.823163 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 15:01:31.823168 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:00.895) 0:00:17.056 *********
2025-08-29 15:01:31.823175 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823232 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823237 | orchestrator |
2025-08-29 15:01:31.823242 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 15:01:31.823247 | orchestrator | Friday 29 August 2025 14:50:06 +0000 (0:00:00.410) 0:00:17.467 *********
2025-08-29 15:01:31.823255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 14:50:01.852849', 'end': '2025-08-29 14:50:02.161343', 'delta': '0:00:00.308494', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823270 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 14:50:02.720634', 'end': '2025-08-29 14:50:02.996383', 'delta': '0:00:00.275749', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 14:50:03.494191', 'end': '2025-08-29 14:50:03.770973', 'delta': '0:00:00.276782', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.823282 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823288 | orchestrator |
2025-08-29 15:01:31.823293 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 15:01:31.823298 | orchestrator | Friday 29 August 2025 14:50:07 +0000 (0:00:00.368) 0:00:17.835 *********
2025-08-29 15:01:31.823304 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.823309 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.823314 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.823319 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.823324 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.823332 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.823337 | orchestrator |
2025-08-29 15:01:31.823342 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 15:01:31.823346 | orchestrator | Friday 29 August 2025 14:50:09 +0000 (0:00:02.346) 0:00:20.182 *********
2025-08-29 15:01:31.823351 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.823421 | orchestrator |
2025-08-29 15:01:31.823426 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 15:01:31.823430 | orchestrator | Friday 29 August 2025 14:50:10 +0000 (0:00:00.705) 0:00:20.888 *********
2025-08-29 15:01:31.823435 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823439 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823444 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823448 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823453 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823457 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823462 | orchestrator |
2025-08-29 15:01:31.823467 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 15:01:31.823471 | orchestrator | Friday 29 August 2025 14:50:11 +0000 (0:00:01.345) 0:00:22.234 *********
2025-08-29 15:01:31.823476 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823480 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823485 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823489 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823494 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823498 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823503 | orchestrator |
2025-08-29 15:01:31.823508 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:01:31.823512 | orchestrator | Friday 29 August 2025 14:50:12 +0000 (0:00:01.370) 0:00:23.604 *********
2025-08-29 15:01:31.823517 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823521 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823526 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823530 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823535 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823539 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823544 | orchestrator |
2025-08-29 15:01:31.823549 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 15:01:31.823553 | orchestrator | Friday 29 August 2025 14:50:13 +0000 (0:00:00.875) 0:00:24.480 *********
2025-08-29 15:01:31.823558 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823562 | orchestrator |
2025-08-29 15:01:31.823567 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 15:01:31.823571 | orchestrator | Friday 29 August 2025 14:50:13 +0000 (0:00:00.138) 0:00:24.618 *********
2025-08-29 15:01:31.823576 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823580 | orchestrator |
2025-08-29 15:01:31.823585 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:01:31.823589 | orchestrator | Friday 29 August 2025 14:50:14 +0000 (0:00:00.222) 0:00:24.840 *********
2025-08-29 15:01:31.823594 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823599 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823603 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823607 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823612 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823617 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823621 | orchestrator |
2025-08-29 15:01:31.823626 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 15:01:31.823633 | orchestrator | Friday 29 August 2025 14:50:14 +0000 (0:00:00.642) 0:00:25.483 *********
2025-08-29 15:01:31.823638 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823643 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823647 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823652 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823660 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823664 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823669 | orchestrator |
2025-08-29 15:01:31.823674 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 15:01:31.823678 | orchestrator | Friday 29 August 2025 14:50:15 +0000 (0:00:00.994) 0:00:26.477 *********
2025-08-29 15:01:31.823685 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823690 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823695 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823699 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823704 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823708 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823713 | orchestrator |
2025-08-29 15:01:31.823717 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 15:01:31.823722 | orchestrator | Friday 29 August 2025 14:50:16 +0000 (0:00:01.078) 0:00:27.556 *********
2025-08-29 15:01:31.823726 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823731 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823770 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823775 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823780 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823784 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823789 | orchestrator |
2025-08-29 15:01:31.823793 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 15:01:31.823798 | orchestrator | Friday 29 August 2025 14:50:18 +0000 (0:00:01.337) 0:00:28.894 *********
2025-08-29 15:01:31.823802 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823807 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823811 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823816 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823820 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823825 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823829 | orchestrator |
2025-08-29 15:01:31.823834 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 15:01:31.823838 | orchestrator | Friday 29 August 2025 14:50:19 +0000 (0:00:01.336) 0:00:30.230 *********
2025-08-29 15:01:31.823843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823852 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823856 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823861 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823865 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823870 | orchestrator |
2025-08-29 15:01:31.823874 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 15:01:31.823879 | orchestrator | Friday 29 August 2025 14:50:20 +0000 (0:00:01.084) 0:00:31.315 *********
2025-08-29 15:01:31.823883 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.823888 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.823892 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.823925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.823930 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.823935 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.823939 | orchestrator |
2025-08-29 15:01:31.823944 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 15:01:31.823949 | orchestrator | Friday 29 August 2025 14:50:21 +0000 (0:00:01.140) 0:00:32.456 *********
2025-08-29 15:01:31.823953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:01:31.823963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:01:31.823968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:01:31.823973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:01:31.823982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids':
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.823990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.823995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.823999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824044 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824135 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824220 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part1', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part14', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part15', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part16', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824680 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.824704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3', 'dm-uuid-LVM-uD6Mo9vRae6rhHQ3Cv8iBIHiOkh7vDv3P02FpXK4GRvrM2StMq05gwLQahS4Aim9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4', 'dm-uuid-LVM-OpprThIuZ7OCUBOX6wZncT3Dym3eACA2PsddSGncHVfpnqMc8ruraJK2Q8IEJ5jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824776 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824849 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OOqYyD-X3ep-idi1-Ed6C-DyzY-wRSz-fgidv8', 'scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714', 'scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N17mz5-EUQz-V9n7-C4vu-3ISy-nma3-edJzNt', 'scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c', 'scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34', 'scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824878 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.824887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc', 'dm-uuid-LVM-201yAH0joyzRFH6sqEqXj7oSaWavLiqWRSVrbAZzOt1xNf7XwuDo3oXGgvcSNdIa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca', 'dm-uuid-LVM-gEFnHyeHdbJqeHuGQxKJcMhhl1Ir8Lgl2cv6rd0M49f0CvZBMvhuDshIUQj7B0B8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824896 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824911 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.824919 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824937 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.824946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PK1oQc-PafE-SXTJ-gC8J-TqLc-GIC3-HqLAAe', 'scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b', 'scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5', 'dm-uuid-LVM-ntkdiD7zsbM03QLUVmvmszSkPpDq2T3WNBLwRo0cmvTQbmNZXYXSsFfmJNZl8Ng2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 
'value': {'holders': ['ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Exi2ct-gAdU-6Qq1-Ctrc-d3jT-eYnt-ALlvmg', 'scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95', 'scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791', 'dm-uuid-LVM-mlP7WRc7Ld5D4hI6Q71tFUCAmKO8L6bM0SFe3GwFSt363EKsLiZu4Xr2Fcm4SqAg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.824991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d', 'scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.824996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.825010 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.825019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:01:31.825063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.825072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BtT3it-8JPO-VgWx-exfl-04Wt-TZQB-eSuhRn', 'scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9', 'scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.825077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IUdtJC-z0Mo-rn1o-MAmW-S78C-2oty-9gBk4d', 'scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b', 'scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:01:31.825082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598', 'scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:01:31.825087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:01:31.825094 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.825098 | orchestrator |
2025-08-29 15:01:31.825104 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-08-29 15:01:31.825109 | orchestrator | Friday 29 August 2025 14:50:23 +0000 (0:00:02.210) 0:00:34.666 *********
2025-08-29 15:01:31.825116 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512',
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825134 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825139 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825144 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825152 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825262 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part1', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part14', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part15', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part16', 'scsi-SQEMU_QEMU_HARDDISK_39f7f322-d91e-4501-9bc1-b2112ccf4f55-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:01:31.825270 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825276 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.825287 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825297 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825303 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825309 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825315 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825325 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825336 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825342 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7332294e-ecb4-4364-9b00-941f8f59b6c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825348 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825359 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.825370 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825376 | orchestrator 
| skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825382 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825387 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825393 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825398 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825406 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825418 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825423 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb5dd987-b987-4c6b-9e7a-49313a8a95d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825428 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825436 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.825448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3', 'dm-uuid-LVM-uD6Mo9vRae6rhHQ3Cv8iBIHiOkh7vDv3P02FpXK4GRvrM2StMq05gwLQahS4Aim9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4', 'dm-uuid-LVM-OpprThIuZ7OCUBOX6wZncT3Dym3eACA2PsddSGncHVfpnqMc8ruraJK2Q8IEJ5jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc', 'dm-uuid-LVM-201yAH0joyzRFH6sqEqXj7oSaWavLiqWRSVrbAZzOt1xNf7XwuDo3oXGgvcSNdIa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825488 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca', 'dm-uuid-LVM-gEFnHyeHdbJqeHuGQxKJcMhhl1Ir8Lgl2cv6rd0M49f0CvZBMvhuDshIUQj7B0B8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825502 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825512 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OOqYyD-X3ep-idi1-Ed6C-DyzY-wRSz-fgidv8', 'scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714', 'scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:01:31.825985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N17mz5-EUQz-V9n7-C4vu-3ISy-nma3-edJzNt', 'scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c', 'scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.825990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34', 'scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5', 'dm-uuid-LVM-ntkdiD7zsbM03QLUVmvmszSkPpDq2T3WNBLwRo0cmvTQbmNZXYXSsFfmJNZl8Ng2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791', 'dm-uuid-LVM-mlP7WRc7Ld5D4hI6Q71tFUCAmKO8L6bM0SFe3GwFSt363EKsLiZu4Xr2Fcm4SqAg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826080 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PK1oQc-PafE-SXTJ-gC8J-TqLc-GIC3-HqLAAe', 'scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b', 'scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Exi2ct-gAdU-6Qq1-Ctrc-d3jT-eYnt-ALlvmg', 'scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95', 'scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d', 'scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826125 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826150 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826167 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BtT3it-8JPO-VgWx-exfl-04Wt-TZQB-eSuhRn', 'scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9', 'scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IUdtJC-z0Mo-rn1o-MAmW-S78C-2oty-9gBk4d', 'scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b', 'scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598', 'scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:01:31.826210 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826215 | orchestrator |
2025-08-29 15:01:31.826219 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 15:01:31.826224 | orchestrator | Friday 29 August 2025 14:50:26 +0000 (0:00:02.054) 0:00:36.721 *********
2025-08-29 15:01:31.826228 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.826233 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.826238 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.826244 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.826249 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.826253 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.826258 | orchestrator |
2025-08-29 15:01:31.826262 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-08-29 15:01:31.826266 | orchestrator | Friday 29 August 2025 14:50:28 +0000 (0:00:02.087) 0:00:38.808 *********
2025-08-29 15:01:31.826271 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.826275 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.826279 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.826284 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.826288 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.826292 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.826296 | orchestrator |
2025-08-29 15:01:31.826301 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 15:01:31.826306 | orchestrator | Friday 29 August 2025 14:50:29 +0000 (0:00:00.922) 0:00:39.731 *********
2025-08-29 15:01:31.826310 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826365 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826379 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826383 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826388 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826392 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826396 | orchestrator |
2025-08-29 15:01:31.826401 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 15:01:31.826405 | orchestrator | Friday 29 August 2025 14:50:30 +0000 (0:00:01.630) 0:00:41.362 *********
2025-08-29 15:01:31.826410 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826414 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826418 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826423 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826430 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826434 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826439 | orchestrator |
2025-08-29 15:01:31.826443 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 15:01:31.826447 | orchestrator | Friday 29 August 2025 14:50:31 +0000 (0:00:00.984) 0:00:42.347 *********
2025-08-29 15:01:31.826452 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826456 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826460 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826465 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826469 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826473 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826477 | orchestrator |
2025-08-29 15:01:31.826481 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 15:01:31.826486 | orchestrator | Friday 29 August 2025 14:50:32 +0000 (0:00:00.886) 0:00:43.233 *********
2025-08-29 15:01:31.826490 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826494 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826498 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826503 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826507 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826511 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826515 | orchestrator |
2025-08-29 15:01:31.826520 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-08-29 15:01:31.826524 | orchestrator | Friday 29 August 2025 14:50:33 +0000 (0:00:01.456) 0:00:44.690 *********
2025-08-29 15:01:31.826529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.826533 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 15:01:31.826537 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 15:01:31.826542 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 15:01:31.826547 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:01:31.826552 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 15:01:31.826557 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:01:31.826562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:01:31.826567 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 15:01:31.826572 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:01:31.826576 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:01:31.826581 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 15:01:31.826586 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 15:01:31.826591 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 15:01:31.826595 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 15:01:31.826600 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 15:01:31.826605 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 15:01:31.826610 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 15:01:31.826615 | orchestrator |
2025-08-29 15:01:31.826620 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-08-29 15:01:31.826625 | orchestrator | Friday 29 August 2025 14:50:38 +0000 (0:00:04.605) 0:00:49.296 *********
2025-08-29 15:01:31.826630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.826635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:01:31.826640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:01:31.826645 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826650 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 15:01:31.826655 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 15:01:31.826660 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 15:01:31.826668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 15:01:31.826673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 15:01:31.826678 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 15:01:31.826683 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826688 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:01:31.826702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:01:31.826706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 15:01:31.826710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:01:31.826714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 15:01:31.826719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 15:01:31.826723 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826727 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826734 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 15:01:31.826739 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 15:01:31.826743 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 15:01:31.826747 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826752 | orchestrator |
2025-08-29 15:01:31.826769 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-08-29 15:01:31.826773 | orchestrator | Friday 29 August 2025 14:50:39 +0000 (0:00:00.627) 0:00:49.923 *********
2025-08-29 15:01:31.826778 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.826782 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.826786 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.826791 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.826796 | orchestrator |
2025-08-29 15:01:31.826800 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 15:01:31.826805 | orchestrator | Friday 29 August 2025 14:50:40 +0000 (0:00:01.343) 0:00:51.266 *********
2025-08-29 15:01:31.826809 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826814 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826818 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826822 | orchestrator |
2025-08-29 15:01:31.826826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 15:01:31.826831 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.518) 0:00:51.785 *********
2025-08-29 15:01:31.826835 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826839 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826875 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826879 | orchestrator |
2025-08-29 15:01:31.826884 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 15:01:31.826888 | orchestrator | Friday 29 August 2025 14:50:41 +0000 (0:00:00.411) 0:00:52.197 *********
2025-08-29 15:01:31.826893 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826897 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.826901 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.826906 | orchestrator |
2025-08-29 15:01:31.826911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 15:01:31.826918 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.629) 0:00:52.826 *********
2025-08-29 15:01:31.826924 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.826931 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.826939 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.826943 | orchestrator |
2025-08-29 15:01:31.826948 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 15:01:31.826957 | orchestrator | Friday 29 August 2025 14:50:42 +0000 (0:00:00.576) 0:00:53.402 *********
2025-08-29 15:01:31.826962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:01:31.826966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:01:31.826970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:01:31.826975 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.826979 | orchestrator |
2025-08-29 15:01:31.826984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 15:01:31.826988 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.378) 0:00:53.780 *********
2025-08-29 15:01:31.826992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:01:31.826997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:01:31.827001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:01:31.827005 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.827010 | orchestrator |
2025-08-29 15:01:31.827014 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 15:01:31.827018 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.411) 0:00:54.192 *********
2025-08-29 15:01:31.827023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:01:31.827027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:01:31.827031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:01:31.827036 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.827040 | orchestrator |
2025-08-29 15:01:31.827044 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 15:01:31.827049 | orchestrator | Friday 29 August 2025 14:50:43 +0000 (0:00:00.431) 0:00:54.624 *********
2025-08-29 15:01:31.827053 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.827057 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.827062 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.827066 | orchestrator |
2025-08-29 15:01:31.827070 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 15:01:31.827075 | orchestrator | Friday 29 August 2025 14:50:44 +0000 (0:00:00.435) 0:00:55.059 *********
2025-08-29 15:01:31.827079 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 15:01:31.827084 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 15:01:31.827088 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 15:01:31.827092 | orchestrator |
2025-08-29 15:01:31.827097 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-08-29 15:01:31.827101 | orchestrator | Friday 29 August 2025 14:50:45 +0000 (0:00:01.009) 0:00:56.068 *********
2025-08-29 15:01:31.827109 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.827113 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:01:31.827118 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:01:31.827122 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-08-29 15:01:31.827126 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 15:01:31.827134 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 15:01:31.827139 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 15:01:31.827143 | orchestrator |
2025-08-29 15:01:31.827147 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-08-29 15:01:31.827152 | orchestrator | Friday 29 August 2025 14:50:46 +0000 (0:00:00.849) 0:00:56.918 *********
2025-08-29 15:01:31.827156 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.827161 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:01:31.827165 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:01:31.827173 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-08-29 15:01:31.827178 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 15:01:31.827182 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 15:01:31.827186 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 15:01:31.827191 | orchestrator |
2025-08-29 15:01:31.827195 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:01:31.827200 | orchestrator | Friday 29 August 2025 14:50:48 +0000 (0:00:02.424) 0:00:59.342 *********
2025-08-29 15:01:31.827204 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.827210 | orchestrator |
2025-08-29 15:01:31.827214 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:01:31.827218 | orchestrator | Friday 29 August 2025 14:50:50 +0000 (0:00:01.954) 0:01:01.296 *********
2025-08-29 15:01:31.827223 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.827227 | orchestrator |
2025-08-29 15:01:31.827232 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:01:31.827236 | orchestrator | Friday 29 August 2025 14:50:52 +0000 (0:00:02.003) 0:01:03.299 *********
2025-08-29 15:01:31.827240 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.827245 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.827249 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.827254 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.827258 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.827262 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.827267 | orchestrator |
2025-08-29 15:01:31.827271 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:01:31.827275 | orchestrator | Friday 29 August 2025 14:50:53 +0000 (0:00:00.896) 0:01:04.196 *********
2025-08-29 15:01:31.827280 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.827284 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.827289 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.827293 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.827297 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.827302 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.827306 | orchestrator |
2025-08-29 15:01:31.827310 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:01:31.827315 | orchestrator | Friday 29 August 2025 14:50:54 +0000 (0:00:01.133) 0:01:05.329 *********
2025-08-29 15:01:31.827319 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.827323 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.827328 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.827332 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.827336 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.827341 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.827345 | orchestrator |
2025-08-29 15:01:31.827349 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:01:31.827354 | orchestrator | Friday 29 August 2025 14:50:56 +0000 (0:00:01.440) 0:01:06.770 *********
2025-08-29 15:01:31.827358 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.827362 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.827367 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.827371 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.827375 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.827380 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.827384 | orchestrator |
2025-08-29 15:01:31.827389 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:01:31.827397 | orchestrator | Friday 29 August 2025 14:50:57 +0000 (0:00:01.294) 0:01:08.065 *********
2025-08-29 15:01:31.827401 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.827407 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.827414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827420 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827427 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.827433 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827439 | orchestrator | 2025-08-29 15:01:31.827448 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:01:31.827454 | orchestrator | Friday 29 August 2025 14:50:58 +0000 (0:00:01.228) 0:01:09.293 ********* 2025-08-29 15:01:31.827463 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827470 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827476 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827482 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827487 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827494 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827500 | orchestrator | 2025-08-29 15:01:31.827507 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:01:31.827513 | orchestrator | Friday 29 August 2025 14:50:59 +0000 (0:00:00.755) 0:01:10.049 ********* 2025-08-29 15:01:31.827520 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827526 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827536 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827543 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827549 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827555 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827561 | orchestrator | 2025-08-29 15:01:31.827568 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:01:31.827574 | orchestrator | Friday 29 August 
2025 14:51:00 +0000 (0:00:01.389) 0:01:11.438 ********* 2025-08-29 15:01:31.827580 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.827587 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.827593 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.827599 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.827606 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.827612 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.827619 | orchestrator | 2025-08-29 15:01:31.827626 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:01:31.827632 | orchestrator | Friday 29 August 2025 14:51:02 +0000 (0:00:01.624) 0:01:13.063 ********* 2025-08-29 15:01:31.827638 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.827645 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.827651 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.827658 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.827665 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.827671 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.827678 | orchestrator | 2025-08-29 15:01:31.827686 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:01:31.827690 | orchestrator | Friday 29 August 2025 14:51:03 +0000 (0:00:01.540) 0:01:14.604 ********* 2025-08-29 15:01:31.827695 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827699 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827703 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827708 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827712 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827716 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827720 | orchestrator | 2025-08-29 15:01:31.827725 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] 
****************************** 2025-08-29 15:01:31.827729 | orchestrator | Friday 29 August 2025 14:51:05 +0000 (0:00:01.146) 0:01:15.750 ********* 2025-08-29 15:01:31.827734 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.827738 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.827747 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.827751 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827790 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827795 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827799 | orchestrator | 2025-08-29 15:01:31.827803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:01:31.827808 | orchestrator | Friday 29 August 2025 14:51:06 +0000 (0:00:01.220) 0:01:16.970 ********* 2025-08-29 15:01:31.827812 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827821 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827825 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.827829 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.827833 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.827838 | orchestrator | 2025-08-29 15:01:31.827842 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:01:31.827846 | orchestrator | Friday 29 August 2025 14:51:07 +0000 (0:00:00.755) 0:01:17.726 ********* 2025-08-29 15:01:31.827850 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827855 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827859 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827863 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.827867 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.827872 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.827876 | orchestrator | 2025-08-29 
15:01:31.827880 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:01:31.827885 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:01.045) 0:01:18.772 ********* 2025-08-29 15:01:31.827889 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827893 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827897 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827902 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.827906 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.827910 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.827915 | orchestrator | 2025-08-29 15:01:31.827919 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:01:31.827923 | orchestrator | Friday 29 August 2025 14:51:08 +0000 (0:00:00.701) 0:01:19.473 ********* 2025-08-29 15:01:31.827927 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827932 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827936 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827940 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827945 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.827949 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827953 | orchestrator | 2025-08-29 15:01:31.827957 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:01:31.827961 | orchestrator | Friday 29 August 2025 14:51:09 +0000 (0:00:00.819) 0:01:20.293 ********* 2025-08-29 15:01:31.827966 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.827970 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.827974 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.827978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.827983 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 15:01:31.827987 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.827991 | orchestrator | 2025-08-29 15:01:31.827995 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:01:31.828003 | orchestrator | Friday 29 August 2025 14:51:10 +0000 (0:00:00.554) 0:01:20.847 ********* 2025-08-29 15:01:31.828008 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.828012 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.828017 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.828021 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828025 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828030 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828038 | orchestrator | 2025-08-29 15:01:31.828042 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:01:31.828068 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.864) 0:01:21.712 ********* 2025-08-29 15:01:31.828073 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.828081 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.828085 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.828090 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.828094 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.828098 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.828103 | orchestrator | 2025-08-29 15:01:31.828108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:01:31.828115 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.628) 0:01:22.341 ********* 2025-08-29 15:01:31.828122 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.828127 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.828131 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.828135 | orchestrator | ok: [testbed-node-3] 2025-08-29 
15:01:31.828139 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.828144 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.828148 | orchestrator | 2025-08-29 15:01:31.828153 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-08-29 15:01:31.828157 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:01.201) 0:01:23.542 ********* 2025-08-29 15:01:31.828161 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.828166 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.828170 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.828174 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.828179 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.828183 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.828187 | orchestrator | 2025-08-29 15:01:31.828192 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-08-29 15:01:31.828196 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:01.475) 0:01:25.017 ********* 2025-08-29 15:01:31.828201 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.828205 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.828209 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.828214 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.828218 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.828222 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.828226 | orchestrator | 2025-08-29 15:01:31.828231 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-08-29 15:01:31.828235 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:01.831) 0:01:26.849 ********* 2025-08-29 15:01:31.828240 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2025-08-29 15:01:31.828244 | orchestrator | 2025-08-29 15:01:31.828249 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-08-29 15:01:31.828253 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:01.037) 0:01:27.886 ********* 2025-08-29 15:01:31.828258 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828262 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828266 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828270 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828274 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828278 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828282 | orchestrator | 2025-08-29 15:01:31.828286 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-08-29 15:01:31.828290 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.655) 0:01:28.542 ********* 2025-08-29 15:01:31.828293 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828297 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828301 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828311 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828315 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828319 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828323 | orchestrator | 2025-08-29 15:01:31.828327 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-08-29 15:01:31.828331 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:00.549) 0:01:29.091 ********* 2025-08-29 15:01:31.828335 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828339 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828343 | 
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828347 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828351 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828355 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828359 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828363 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828367 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 15:01:31.828371 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828375 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828379 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 15:01:31.828383 | orchestrator | 2025-08-29 15:01:31.828389 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-08-29 15:01:31.828393 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:01.443) 0:01:30.535 ********* 2025-08-29 15:01:31.828397 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.828401 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.828405 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.828409 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.828413 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.828417 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.828421 | orchestrator | 2025-08-29 15:01:31.828425 | orchestrator | TASK 
[ceph-container-common : Restore certificates selinux context] ************ 2025-08-29 15:01:31.828432 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:00.855) 0:01:31.390 ********* 2025-08-29 15:01:31.828436 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828440 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828448 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828452 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828456 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828459 | orchestrator | 2025-08-29 15:01:31.828463 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-08-29 15:01:31.828467 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:00.703) 0:01:32.094 ********* 2025-08-29 15:01:31.828471 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828475 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828479 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828483 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828487 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828491 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828495 | orchestrator | 2025-08-29 15:01:31.828499 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-08-29 15:01:31.828503 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:00.542) 0:01:32.636 ********* 2025-08-29 15:01:31.828507 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828516 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828520 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828524 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828528 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828532 
| orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828536 | orchestrator | 2025-08-29 15:01:31.828540 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-08-29 15:01:31.828544 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:00.665) 0:01:33.301 ********* 2025-08-29 15:01:31.828548 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.828552 | orchestrator | 2025-08-29 15:01:31.828556 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-08-29 15:01:31.828560 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:00.988) 0:01:34.290 ********* 2025-08-29 15:01:31.828564 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.828568 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.828572 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.828576 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.828580 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.828584 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.828588 | orchestrator | 2025-08-29 15:01:31.828592 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-08-29 15:01:31.828596 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:01:11.755) 0:02:46.046 ********* 2025-08-29 15:01:31.828600 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828604 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828608 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828616 | orchestrator | skipping: 
[testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828620 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828624 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828628 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828632 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828636 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828640 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828643 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828648 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828651 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828655 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828659 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828663 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828667 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828671 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828675 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828679 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 15:01:31.828683 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 15:01:31.828687 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 15:01:31.828693 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828701 | orchestrator | 2025-08-29 15:01:31.828705 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 15:01:31.828709 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:01.087) 0:02:47.133 ********* 2025-08-29 15:01:31.828713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828717 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828721 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828725 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828729 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828733 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828737 | orchestrator | 2025-08-29 15:01:31.828743 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 15:01:31.828747 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.595) 0:02:47.728 ********* 2025-08-29 15:01:31.828751 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828766 | orchestrator | 2025-08-29 15:01:31.828770 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 15:01:31.828774 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.151) 0:02:47.880 ********* 2025-08-29 15:01:31.828778 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828782 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828790 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828794 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828798 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828802 | orchestrator | 2025-08-29 15:01:31.828806 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 
2025-08-29 15:01:31.828810 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.942) 0:02:48.823 ********* 2025-08-29 15:01:31.828814 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828818 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828826 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828830 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828834 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828838 | orchestrator | 2025-08-29 15:01:31.828842 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-08-29 15:01:31.828846 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.617) 0:02:49.441 ********* 2025-08-29 15:01:31.828850 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.828854 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.828858 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.828862 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.828866 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.828870 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.828874 | orchestrator | 2025-08-29 15:01:31.828878 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-08-29 15:01:31.828882 | orchestrator | Friday 29 August 2025 14:52:39 +0000 (0:00:00.816) 0:02:50.258 ********* 2025-08-29 15:01:31.828886 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.828890 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.828894 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.828898 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.828903 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.828907 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.828910 | orchestrator | 2025-08-29 
15:01:31.828915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-08-29 15:01:31.828919 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:02.048) 0:02:52.307 *********
2025-08-29 15:01:31.828923 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.828927 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.828931 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.828934 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.828938 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.828945 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.828949 | orchestrator |
2025-08-29 15:01:31.828954 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-08-29 15:01:31.828958 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:00.860) 0:02:53.167 *********
2025-08-29 15:01:31.828962 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.828967 | orchestrator |
2025-08-29 15:01:31.828971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-08-29 15:01:31.828975 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:01.222) 0:02:54.390 *********
2025-08-29 15:01:31.828979 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.828983 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.828987 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.828991 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.828995 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.828999 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829003 | orchestrator |
2025-08-29 15:01:31.829007 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-08-29 15:01:31.829011 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:00.707) 0:02:55.098 *********
2025-08-29 15:01:31.829015 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829019 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829023 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829027 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829031 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829035 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829039 | orchestrator |
2025-08-29 15:01:31.829043 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-08-29 15:01:31.829047 | orchestrator | Friday 29 August 2025 14:52:45 +0000 (0:00:01.009) 0:02:56.107 *********
2025-08-29 15:01:31.829051 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829055 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829059 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829063 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829067 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829071 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829075 | orchestrator |
2025-08-29 15:01:31.829079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-08-29 15:01:31.829086 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:00.875) 0:02:56.983 *********
2025-08-29 15:01:31.829090 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829094 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829098 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829102 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829106 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829110 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829113 | orchestrator |
2025-08-29 15:01:31.829118 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-08-29 15:01:31.829124 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:00.872) 0:02:57.855 *********
2025-08-29 15:01:31.829128 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829132 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829136 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829140 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829144 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829148 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829152 | orchestrator |
2025-08-29 15:01:31.829156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-08-29 15:01:31.829160 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:00.539) 0:02:58.395 *********
2025-08-29 15:01:31.829164 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829170 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829174 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829178 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829182 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829186 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829190 | orchestrator |
2025-08-29 15:01:31.829194 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-08-29 15:01:31.829198 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.713) 0:02:59.109 *********
2025-08-29 15:01:31.829202 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829206 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829210 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829214 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829218 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829222 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829226 | orchestrator |
2025-08-29 15:01:31.829230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-08-29 15:01:31.829234 | orchestrator | Friday 29 August 2025 14:52:48 +0000 (0:00:00.493) 0:02:59.602 *********
2025-08-29 15:01:31.829238 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829242 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829246 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829250 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829254 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829258 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829262 | orchestrator |
2025-08-29 15:01:31.829266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-08-29 15:01:31.829270 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:00.594) 0:03:00.197 *********
2025-08-29 15:01:31.829274 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.829278 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.829282 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.829286 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.829290 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.829294 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.829298 | orchestrator |
2025-08-29 15:01:31.829302 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-08-29 15:01:31.829306 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:01.133) 0:03:01.331 *********
2025-08-29 15:01:31.829310 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.829314 | orchestrator |
2025-08-29 15:01:31.829318 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-08-29 15:01:31.829322 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:01.185) 0:03:02.516 *********
2025-08-29 15:01:31.829326 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-08-29 15:01:31.829331 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-08-29 15:01:31.829335 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-08-29 15:01:31.829339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829343 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-08-29 15:01:31.829347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829351 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-08-29 15:01:31.829355 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-08-29 15:01:31.829359 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829363 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829371 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829375 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-08-29 15:01:31.829382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829386 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829390 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829394 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829398 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829402 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829406 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-08-29 15:01:31.829410 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829421 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829425 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829433 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-08-29 15:01:31.829440 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829444 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829450 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829454 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829462 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-08-29 15:01:31.829466 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829470 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829474 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829478 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829486 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-08-29 15:01:31.829490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829494 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829498 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829502 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-08-29 15:01:31.829514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829519 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829523 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829530 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829534 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:01:31.829538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829542 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829557 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829561 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:01:31.829566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829574 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829578 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829582 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829590 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:01:31.829594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829598 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829606 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829610 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829614 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:01:31.829618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829622 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829626 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829630 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829634 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829638 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:01:31.829642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829646 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829650 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829656 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829660 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829664 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:01:31.829668 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-08-29 15:01:31.829672 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-08-29 15:01:31.829676 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-08-29 15:01:31.829680 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-08-29 15:01:31.829686 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829690 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:01:31.829694 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-08-29 15:01:31.829698 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-08-29 15:01:31.829702 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-08-29 15:01:31.829706 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-08-29 15:01:31.829710 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-08-29 15:01:31.829714 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-08-29 15:01:31.829718 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-08-29 15:01:31.829722 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-08-29 15:01:31.829729 | orchestrator |
2025-08-29 15:01:31.829733 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-08-29 15:01:31.829737 | orchestrator | Friday 29 August 2025 14:52:58 +0000 (0:00:06.710) 0:03:09.227 *********
2025-08-29 15:01:31.829741 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829745 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829749 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829766 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.829770 | orchestrator |
2025-08-29 15:01:31.829775 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-08-29 15:01:31.829779 | orchestrator | Friday 29 August 2025 14:53:00 +0000 (0:00:01.643) 0:03:10.871 *********
2025-08-29 15:01:31.829783 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829787 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829791 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829795 | orchestrator |
2025-08-29 15:01:31.829799 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-08-29 15:01:31.829804 | orchestrator | Friday 29 August 2025 14:53:01 +0000 (0:00:00.945) 0:03:11.816 *********
2025-08-29 15:01:31.829808 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829812 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829816 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.829820 | orchestrator |
2025-08-29 15:01:31.829824 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-08-29 15:01:31.829828 | orchestrator | Friday 29 August 2025 14:53:02 +0000 (0:00:01.613) 0:03:13.429 *********
2025-08-29 15:01:31.829832 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829836 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829840 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829844 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.829848 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.829852 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.829856 | orchestrator |
2025-08-29 15:01:31.829860 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-08-29 15:01:31.829864 | orchestrator | Friday 29 August 2025 14:53:03 +0000 (0:00:00.978) 0:03:14.408 *********
2025-08-29 15:01:31.829868 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829872 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829876 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829880 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.829884 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.829888 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.829892 | orchestrator |
2025-08-29 15:01:31.829897 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-08-29 15:01:31.829901 | orchestrator | Friday 29 August 2025 14:53:04 +0000 (0:00:00.908) 0:03:15.317 *********
2025-08-29 15:01:31.829905 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829909 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.829913 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.829917 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.829921 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.829925 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.829932 | orchestrator |
2025-08-29 15:01:31.829936 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-08-29 15:01:31.829940 | orchestrator | Friday 29 August 2025 14:53:05 +0000 (0:00:01.336) 0:03:16.653 *********
2025-08-29 15:01:31.829944 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.829948 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830504 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830515 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830520 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830524 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830528 | orchestrator |
2025-08-29 15:01:31.830532 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-08-29 15:01:31.830537 | orchestrator | Friday 29 August 2025 14:53:06 +0000 (0:00:00.717) 0:03:17.371 *********
2025-08-29 15:01:31.830541 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830545 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830549 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830553 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830561 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830566 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830569 | orchestrator |
2025-08-29 15:01:31.830573 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-08-29 15:01:31.830578 | orchestrator | Friday 29 August 2025 14:53:07 +0000 (0:00:00.836) 0:03:18.207 *********
2025-08-29 15:01:31.830582 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830586 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830590 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830594 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830598 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830602 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830606 | orchestrator |
2025-08-29 15:01:31.830610 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-08-29 15:01:31.830614 | orchestrator | Friday 29 August 2025 14:53:08 +0000 (0:00:00.799) 0:03:19.007 *********
2025-08-29 15:01:31.830618 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830622 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830626 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830629 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830633 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830637 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830641 | orchestrator |
2025-08-29 15:01:31.830645 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-08-29 15:01:31.830649 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.817) 0:03:19.825 *********
2025-08-29 15:01:31.830653 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830657 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830661 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830665 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830669 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830673 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830677 | orchestrator |
2025-08-29 15:01:31.830681 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-08-29 15:01:31.830685 | orchestrator | Friday 29 August 2025 14:53:09 +0000 (0:00:00.779) 0:03:20.604 *********
2025-08-29 15:01:31.830689 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830693 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830697 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830701 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.830705 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.830709 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.830713 | orchestrator |
2025-08-29 15:01:31.830717 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-08-29 15:01:31.830732 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:03.389) 0:03:23.994 *********
2025-08-29 15:01:31.830737 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830741 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830745 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830749 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.830772 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.830776 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.830780 | orchestrator |
2025-08-29 15:01:31.830784 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-08-29 15:01:31.830787 | orchestrator | Friday 29 August 2025 14:53:13 +0000 (0:00:00.670) 0:03:24.665 *********
2025-08-29 15:01:31.830791 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830795 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830799 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830802 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.830806 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.830810 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.830813 | orchestrator |
2025-08-29 15:01:31.830817 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-08-29 15:01:31.830821 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:01.136) 0:03:25.801 *********
2025-08-29 15:01:31.830825 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830828 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830832 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830836 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830839 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830843 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830847 | orchestrator |
2025-08-29 15:01:31.830850 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-08-29 15:01:31.830870 | orchestrator | Friday 29 August 2025 14:53:15 +0000 (0:00:00.827) 0:03:26.629 *********
2025-08-29 15:01:31.830873 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830877 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830881 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830885 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.830888 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.830892 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:01:31.830896 | orchestrator |
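The two "Set_fact num_osds" tasks above derive the OSD count from the JSON emitted by `ceph-volume lvm batch --report`, handling both the legacy and the newer report layout (on this run they were skipped because existing OSDs were counted via `ceph-volume lvm list` instead). A minimal sketch of that counting logic; the sample report shapes are illustrative assumptions, not output from this run:

```python
import json

# Newer report format: the top level is a list of planned OSD specs
# (assumed sample data for illustration).
new_report = json.loads('[{"data": "/dev/sdb"}, {"data": "/dev/sdc"}]')

# Legacy report format: a dict with an "osds" list (assumed sample data).
legacy_report = json.loads('{"osds": [{"data": "/dev/sdb"}], "vgs": []}')

def count_osds(report):
    """Count OSDs to be created, accepting either report format."""
    if isinstance(report, list):        # new format: list of OSD specs
        return len(report)
    return len(report.get("osds", []))  # legacy format: dict with "osds"

print(count_osds(new_report))     # 2
print(count_osds(legacy_report))  # 1
```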
2025-08-29 15:01:31.830900 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-08-29 15:01:31.830911 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:01.133) 0:03:27.762 *********
2025-08-29 15:01:31.830916 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830919 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830923 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830931 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-08-29 15:01:31.830937 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-08-29 15:01:31.830942 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830946 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-08-29 15:01:31.830954 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-08-29 15:01:31.830958 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.830962 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-08-29 15:01:31.830966 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-08-29 15:01:31.830970 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.830973 | orchestrator |
2025-08-29 15:01:31.830977 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-08-29 15:01:31.830981 | orchestrator | Friday 29 August 2025 14:53:17 +0000 (0:00:00.793) 0:03:28.556 *********
2025-08-29 15:01:31.830984 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.830988 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.830992 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.830995 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.830999 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.831003 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.831006 | orchestrator |
2025-08-29 15:01:31.831010 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-08-29 15:01:31.831014 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:00.761) 0:03:29.317 *********
2025-08-29 15:01:31.831018 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831021 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831025 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831029 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.831032 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.831036 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.831040 | orchestrator |
2025-08-29 15:01:31.831044 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 15:01:31.831047 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.502) 0:03:29.820 *********
2025-08-29 15:01:31.831051 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831055 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831058 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831062 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.831066 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.831069 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.831073 | orchestrator |
2025-08-29 15:01:31.831077 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 15:01:31.831080 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.601) 0:03:30.421 *********
2025-08-29 15:01:31.831084 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831088 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831091 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831095 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.831099 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.831102 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.831106 | orchestrator |
2025-08-29 15:01:31.831110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 15:01:31.831117 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.514) 0:03:30.935 *********
2025-08-29 15:01:31.831122 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831126 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831130 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831137 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.831142 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.831146 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.831150 | orchestrator |
2025-08-29 15:01:31.831155 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 15:01:31.831159 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.731) 0:03:31.667 *********
2025-08-29 15:01:31.831163 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831168 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831172 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831177 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.831181 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.831185 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.831190 | orchestrator |
2025-08-29 15:01:31.831196 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 15:01:31.831201 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.766) 0:03:32.434 *********
2025-08-29 15:01:31.831205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 15:01:31.831210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 15:01:31.831214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 15:01:31.831219 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831223 | orchestrator |
2025-08-29 15:01:31.831227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 15:01:31.831232 | orchestrator | Friday 29 August 2025 14:53:22 +0000 (0:00:00.766) 0:03:33.201 *********
2025-08-29 15:01:31.831236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 15:01:31.831240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 15:01:31.831245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 15:01:31.831249 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831254 | orchestrator |
2025-08-29 15:01:31.831258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 15:01:31.831262 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:00.614) 0:03:33.815 *********
2025-08-29 15:01:31.831266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 15:01:31.831271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 15:01:31.831275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 15:01:31.831279 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831283 | orchestrator |
2025-08-29 15:01:31.831286 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 15:01:31.831290 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:00.676) 0:03:34.492 *********
2025-08-29 15:01:31.831294 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831297 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831301 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831305 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.831309 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.831312 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.831316 | orchestrator |
2025-08-29 15:01:31.831320 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 15:01:31.831324 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.635) 0:03:35.127 *********
2025-08-29 15:01:31.831328 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-08-29 15:01:31.831331 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.831335 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-08-29 15:01:31.831342 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.831345 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 15:01:31.831349 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 15:01:31.831353 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-08-29 15:01:31.831356 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.831360 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 15:01:31.831364 | orchestrator |
2025-08-29 15:01:31.831368 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-08-29 15:01:31.831372 | orchestrator | Friday 29 August 2025 14:53:27 +0000 (0:00:02.950) 0:03:38.078 *********
2025-08-29 15:01:31.831375 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.831379 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.831383 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.831386 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.831390 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.831394 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.831398 | orchestrator |
2025-08-29 15:01:31.831401 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 15:01:31.831405 | orchestrator | Friday 29 August 2025 14:53:31 +0000 (0:00:04.623) 0:03:42.702 *********
2025-08-29 15:01:31.831409 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.831413 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.831416 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.831420 | orchestrator | changed:
[testbed-node-0] 2025-08-29 15:01:31.831424 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.831427 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.831431 | orchestrator | 2025-08-29 15:01:31.831435 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 15:01:31.831439 | orchestrator | Friday 29 August 2025 14:53:34 +0000 (0:00:02.100) 0:03:44.803 ********* 2025-08-29 15:01:31.831442 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831446 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.831450 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.831454 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-08-29 15:01:31.831458 | orchestrator | 2025-08-29 15:01:31.831462 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 15:01:31.831465 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:01.348) 0:03:46.151 ********* 2025-08-29 15:01:31.831469 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.831473 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.831477 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.831480 | orchestrator | 2025-08-29 15:01:31.831484 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 15:01:31.831490 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:00.365) 0:03:46.517 ********* 2025-08-29 15:01:31.831495 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.831498 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.831502 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.831506 | orchestrator | 2025-08-29 15:01:31.831510 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 15:01:31.831514 | orchestrator 
| Friday 29 August 2025 14:53:37 +0000 (0:00:01.646) 0:03:48.164 ********* 2025-08-29 15:01:31.831517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:01:31.831524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:01:31.831527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:01:31.831531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.831535 | orchestrator | 2025-08-29 15:01:31.831539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 15:01:31.831542 | orchestrator | Friday 29 August 2025 14:53:38 +0000 (0:00:00.860) 0:03:49.024 ********* 2025-08-29 15:01:31.831549 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.831553 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.831556 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.831560 | orchestrator | 2025-08-29 15:01:31.831564 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 15:01:31.831568 | orchestrator | Friday 29 August 2025 14:53:38 +0000 (0:00:00.526) 0:03:49.551 ********* 2025-08-29 15:01:31.831571 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.831575 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.831579 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.831583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.831587 | orchestrator | 2025-08-29 15:01:31.831590 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 15:01:31.831594 | orchestrator | Friday 29 August 2025 14:53:39 +0000 (0:00:00.888) 0:03:50.440 ********* 2025-08-29 15:01:31.831598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.831602 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.831605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.831609 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831613 | orchestrator | 2025-08-29 15:01:31.831617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 15:01:31.831620 | orchestrator | Friday 29 August 2025 14:53:40 +0000 (0:00:00.529) 0:03:50.970 ********* 2025-08-29 15:01:31.831624 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831628 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.831631 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.831635 | orchestrator | 2025-08-29 15:01:31.831639 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 15:01:31.831643 | orchestrator | Friday 29 August 2025 14:53:40 +0000 (0:00:00.630) 0:03:51.600 ********* 2025-08-29 15:01:31.831646 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831650 | orchestrator | 2025-08-29 15:01:31.831654 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 15:01:31.831658 | orchestrator | Friday 29 August 2025 14:53:41 +0000 (0:00:00.374) 0:03:51.975 ********* 2025-08-29 15:01:31.831661 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831665 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.831669 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.831673 | orchestrator | 2025-08-29 15:01:31.831676 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 15:01:31.831680 | orchestrator | Friday 29 August 2025 14:53:41 +0000 (0:00:00.467) 0:03:52.443 ********* 2025-08-29 15:01:31.831684 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831688 | orchestrator | 2025-08-29 
15:01:31.831691 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 15:01:31.831695 | orchestrator | Friday 29 August 2025 14:53:41 +0000 (0:00:00.203) 0:03:52.646 ********* 2025-08-29 15:01:31.831699 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831703 | orchestrator | 2025-08-29 15:01:31.831706 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 15:01:31.831710 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:00.200) 0:03:52.846 ********* 2025-08-29 15:01:31.831714 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831718 | orchestrator | 2025-08-29 15:01:31.831721 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 15:01:31.831725 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:00.113) 0:03:52.960 ********* 2025-08-29 15:01:31.831729 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831733 | orchestrator | 2025-08-29 15:01:31.831736 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 15:01:31.831740 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:00.211) 0:03:53.172 ********* 2025-08-29 15:01:31.831747 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831751 | orchestrator | 2025-08-29 15:01:31.831768 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 15:01:31.831772 | orchestrator | Friday 29 August 2025 14:53:42 +0000 (0:00:00.210) 0:03:53.382 ********* 2025-08-29 15:01:31.831776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.831780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.831784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.831787 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 15:01:31.831791 | orchestrator | 2025-08-29 15:01:31.831795 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 15:01:31.831799 | orchestrator | Friday 29 August 2025 14:53:43 +0000 (0:00:00.540) 0:03:53.923 ********* 2025-08-29 15:01:31.831802 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831806 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.831810 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.831814 | orchestrator | 2025-08-29 15:01:31.831820 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 15:01:31.831824 | orchestrator | Friday 29 August 2025 14:53:43 +0000 (0:00:00.690) 0:03:54.613 ********* 2025-08-29 15:01:31.831828 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831831 | orchestrator | 2025-08-29 15:01:31.831835 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 15:01:31.831839 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:00.197) 0:03:54.810 ********* 2025-08-29 15:01:31.831843 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831846 | orchestrator | 2025-08-29 15:01:31.831850 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 15:01:31.831856 | orchestrator | Friday 29 August 2025 14:53:44 +0000 (0:00:00.210) 0:03:55.021 ********* 2025-08-29 15:01:31.831860 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.831864 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.831868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.831871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.831875 | orchestrator | 2025-08-29 15:01:31.831879 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 15:01:31.831883 | orchestrator | Friday 29 August 2025 14:53:45 +0000 (0:00:01.112) 0:03:56.134 ********* 2025-08-29 15:01:31.831887 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.831890 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.831894 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.831898 | orchestrator | 2025-08-29 15:01:31.831902 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 15:01:31.831906 | orchestrator | Friday 29 August 2025 14:53:46 +0000 (0:00:00.822) 0:03:56.957 ********* 2025-08-29 15:01:31.831909 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.831913 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.831917 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.831921 | orchestrator | 2025-08-29 15:01:31.831924 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 15:01:31.831928 | orchestrator | Friday 29 August 2025 14:53:47 +0000 (0:00:01.284) 0:03:58.241 ********* 2025-08-29 15:01:31.831932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.831936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.831939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.831943 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.831947 | orchestrator | 2025-08-29 15:01:31.831951 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 15:01:31.831954 | orchestrator | Friday 29 August 2025 14:53:48 +0000 (0:00:01.025) 0:03:59.266 ********* 2025-08-29 15:01:31.831961 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.831965 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.831969 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:01:31.831972 | orchestrator | 2025-08-29 15:01:31.831976 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 15:01:31.831980 | orchestrator | Friday 29 August 2025 14:53:48 +0000 (0:00:00.424) 0:03:59.691 ********* 2025-08-29 15:01:31.831984 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.831988 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.831991 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.831995 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.831999 | orchestrator | 2025-08-29 15:01:31.832003 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:01:31.832006 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:01.335) 0:04:01.026 ********* 2025-08-29 15:01:31.832010 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.832014 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.832018 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.832021 | orchestrator | 2025-08-29 15:01:31.832025 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:01:31.832029 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:00.322) 0:04:01.348 ********* 2025-08-29 15:01:31.832033 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.832036 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.832040 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.832044 | orchestrator | 2025-08-29 15:01:31.832048 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:01:31.832051 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:01.871) 0:04:03.220 ********* 2025-08-29 15:01:31.832055 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.832059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.832063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.832067 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.832070 | orchestrator | 2025-08-29 15:01:31.832074 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:01:31.832078 | orchestrator | Friday 29 August 2025 14:53:53 +0000 (0:00:00.664) 0:04:03.884 ********* 2025-08-29 15:01:31.832082 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.832085 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.832089 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.832093 | orchestrator | 2025-08-29 15:01:31.832097 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-08-29 15:01:31.832101 | orchestrator | Friday 29 August 2025 14:53:53 +0000 (0:00:00.384) 0:04:04.268 ********* 2025-08-29 15:01:31.832104 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832108 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832112 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.832120 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.832123 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.832127 | orchestrator | 2025-08-29 15:01:31.832131 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:01:31.832135 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:00.855) 0:04:05.124 ********* 2025-08-29 15:01:31.832141 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.832145 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.832149 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 15:01:31.832153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:31.832157 | orchestrator | 2025-08-29 15:01:31.832161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:01:31.832168 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:00.798) 0:04:05.922 ********* 2025-08-29 15:01:31.832172 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832176 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832182 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832186 | orchestrator | 2025-08-29 15:01:31.832190 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:01:31.832194 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:00.560) 0:04:06.482 ********* 2025-08-29 15:01:31.832198 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.832201 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.832205 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.832209 | orchestrator | 2025-08-29 15:01:31.832213 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:01:31.832216 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:01.265) 0:04:07.748 ********* 2025-08-29 15:01:31.832220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:01:31.832224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:01:31.832228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:01:31.832232 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832235 | orchestrator | 2025-08-29 15:01:31.832239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:01:31.832243 | orchestrator | 
Friday 29 August 2025 14:53:57 +0000 (0:00:00.612) 0:04:08.361 ********* 2025-08-29 15:01:31.832247 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832250 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832254 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832258 | orchestrator | 2025-08-29 15:01:31.832262 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-08-29 15:01:31.832266 | orchestrator | 2025-08-29 15:01:31.832269 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:01:31.832273 | orchestrator | Friday 29 August 2025 14:53:58 +0000 (0:00:00.594) 0:04:08.955 ********* 2025-08-29 15:01:31.832277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:31.832281 | orchestrator | 2025-08-29 15:01:31.832285 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:01:31.832288 | orchestrator | Friday 29 August 2025 14:53:59 +0000 (0:00:00.792) 0:04:09.747 ********* 2025-08-29 15:01:31.832292 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-08-29 15:01:31.832296 | orchestrator | 2025-08-29 15:01:31.832300 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:01:31.832303 | orchestrator | Friday 29 August 2025 14:53:59 +0000 (0:00:00.536) 0:04:10.284 ********* 2025-08-29 15:01:31.832307 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832311 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832315 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832319 | orchestrator | 2025-08-29 15:01:31.832322 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 
2025-08-29 15:01:31.832326 | orchestrator | Friday 29 August 2025 14:54:00 +0000 (0:00:00.947) 0:04:11.231 ********* 2025-08-29 15:01:31.832330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832334 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832338 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832341 | orchestrator | 2025-08-29 15:01:31.832345 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:01:31.832349 | orchestrator | Friday 29 August 2025 14:54:00 +0000 (0:00:00.327) 0:04:11.559 ********* 2025-08-29 15:01:31.832353 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832364 | orchestrator | 2025-08-29 15:01:31.832370 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:01:31.832374 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:00.342) 0:04:11.902 ********* 2025-08-29 15:01:31.832378 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832382 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832385 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832389 | orchestrator | 2025-08-29 15:01:31.832393 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:01:31.832397 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:00.333) 0:04:12.235 ********* 2025-08-29 15:01:31.832400 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832404 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832408 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832412 | orchestrator | 2025-08-29 15:01:31.832415 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 
15:01:31.832419 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:00.943) 0:04:13.179 ********* 2025-08-29 15:01:31.832423 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832427 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832430 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832434 | orchestrator | 2025-08-29 15:01:31.832438 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:01:31.832442 | orchestrator | Friday 29 August 2025 14:54:02 +0000 (0:00:00.309) 0:04:13.488 ********* 2025-08-29 15:01:31.832445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832449 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832457 | orchestrator | 2025-08-29 15:01:31.832461 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:01:31.832467 | orchestrator | Friday 29 August 2025 14:54:03 +0000 (0:00:00.307) 0:04:13.795 ********* 2025-08-29 15:01:31.832471 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832475 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832478 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832482 | orchestrator | 2025-08-29 15:01:31.832486 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:01:31.832490 | orchestrator | Friday 29 August 2025 14:54:03 +0000 (0:00:00.910) 0:04:14.706 ********* 2025-08-29 15:01:31.832493 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832497 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832501 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832505 | orchestrator | 2025-08-29 15:01:31.832511 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:01:31.832515 | orchestrator | Friday 
29 August 2025 14:54:05 +0000 (0:00:01.089) 0:04:15.796 ********* 2025-08-29 15:01:31.832518 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832522 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832526 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832530 | orchestrator | 2025-08-29 15:01:31.832534 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:01:31.832537 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:00.344) 0:04:16.141 ********* 2025-08-29 15:01:31.832541 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.832545 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.832549 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.832552 | orchestrator | 2025-08-29 15:01:31.832556 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:01:31.832560 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:00.495) 0:04:16.636 ********* 2025-08-29 15:01:31.832564 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832568 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832571 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832575 | orchestrator | 2025-08-29 15:01:31.832579 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:01:31.832588 | orchestrator | Friday 29 August 2025 14:54:06 +0000 (0:00:00.367) 0:04:17.003 ********* 2025-08-29 15:01:31.832591 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.832595 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.832599 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.832603 | orchestrator | 2025-08-29 15:01:31.832607 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:01:31.832610 | orchestrator | Friday 29 August 2025 
14:54:06 +0000 (0:00:00.315) 0:04:17.318 *********
2025-08-29 15:01:31.832614 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.832618 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.832622 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.832625 | orchestrator |
2025-08-29 15:01:31.832629 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:01:31.832633 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.600) 0:04:17.919 *********
2025-08-29 15:01:31.832637 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.832640 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.832644 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.832648 | orchestrator |
2025-08-29 15:01:31.832652 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:01:31.832656 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.321) 0:04:18.240 *********
2025-08-29 15:01:31.832659 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.832663 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.832667 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.832670 | orchestrator |
2025-08-29 15:01:31.832674 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:01:31.832678 | orchestrator | Friday 29 August 2025 14:54:07 +0000 (0:00:00.325) 0:04:18.566 *********
2025-08-29 15:01:31.832682 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832685 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832689 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832693 | orchestrator |
2025-08-29 15:01:31.832697 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:01:31.832700 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.323) 0:04:18.890 *********
2025-08-29 15:01:31.832704 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832708 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832712 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832715 | orchestrator |
2025-08-29 15:01:31.832719 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:01:31.832723 | orchestrator | Friday 29 August 2025 14:54:08 +0000 (0:00:00.627) 0:04:19.517 *********
2025-08-29 15:01:31.832727 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832731 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832734 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832738 | orchestrator |
2025-08-29 15:01:31.832742 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-08-29 15:01:31.832745 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:00.547) 0:04:20.065 *********
2025-08-29 15:01:31.832749 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832763 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832768 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832771 | orchestrator |
2025-08-29 15:01:31.832775 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-08-29 15:01:31.832779 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:00.347) 0:04:20.412 *********
2025-08-29 15:01:31.832783 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.832787 | orchestrator |
2025-08-29 15:01:31.832790 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-08-29 15:01:31.832794 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.865) 0:04:21.278 *********
2025-08-29 15:01:31.832798 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.832805 | orchestrator |
2025-08-29 15:01:31.832809 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-08-29 15:01:31.832813 | orchestrator | Friday 29 August 2025 14:54:10 +0000 (0:00:00.163) 0:04:21.442 *********
2025-08-29 15:01:31.832817 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-08-29 15:01:31.832821 | orchestrator |
2025-08-29 15:01:31.832827 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-08-29 15:01:31.832831 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:01.010) 0:04:22.452 *********
2025-08-29 15:01:31.832835 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832838 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832842 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832846 | orchestrator |
2025-08-29 15:01:31.832850 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-08-29 15:01:31.832853 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.352) 0:04:22.805 *********
2025-08-29 15:01:31.832857 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832861 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832867 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832871 | orchestrator |
2025-08-29 15:01:31.832875 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-08-29 15:01:31.832879 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.461) 0:04:23.267 *********
2025-08-29 15:01:31.832882 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.832886 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.832890 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.832894 | orchestrator |
2025-08-29 15:01:31.832897 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-08-29 15:01:31.832901 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:01.211) 0:04:24.478 *********
2025-08-29 15:01:31.832905 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.832909 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.832913 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.832916 | orchestrator |
2025-08-29 15:01:31.832920 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-08-29 15:01:31.832924 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.685) 0:04:25.164 *********
2025-08-29 15:01:31.832928 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.832932 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.832935 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.832939 | orchestrator |
2025-08-29 15:01:31.832943 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-08-29 15:01:31.832947 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.655) 0:04:25.819 *********
2025-08-29 15:01:31.832951 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832954 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.832958 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.832962 | orchestrator |
2025-08-29 15:01:31.832966 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-08-29 15:01:31.832969 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.928) 0:04:26.748 *********
2025-08-29 15:01:31.832973 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.832977 | orchestrator |
2025-08-29 15:01:31.832981 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-08-29 15:01:31.832984 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:01.176) 0:04:27.925 *********
2025-08-29 15:01:31.832988 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.832992 | orchestrator |
2025-08-29 15:01:31.832996 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-08-29 15:01:31.833000 | orchestrator | Friday 29 August 2025 14:54:17 +0000 (0:00:00.690) 0:04:28.615 *********
2025-08-29 15:01:31.833004 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.833007 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:01:31.833011 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:01:31.833018 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:01:31.833022 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 15:01:31.833025 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-08-29 15:01:31.833029 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:01:31.833033 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-08-29 15:01:31.833037 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-08-29 15:01:31.833041 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-08-29 15:01:31.833045 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 15:01:31.833049 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-08-29 15:01:31.833053 | orchestrator |
2025-08-29 15:01:31.833056 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-08-29 15:01:31.833060 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:03.243) 0:04:31.858 *********
2025-08-29 15:01:31.833064 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833068 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833072 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833075 | orchestrator |
2025-08-29 15:01:31.833079 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-08-29 15:01:31.833083 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:01.182) 0:04:33.041 *********
2025-08-29 15:01:31.833087 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833090 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833094 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833098 | orchestrator |
2025-08-29 15:01:31.833102 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-08-29 15:01:31.833105 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:00.599) 0:04:33.641 *********
2025-08-29 15:01:31.833109 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833113 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833117 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833121 | orchestrator |
2025-08-29 15:01:31.833124 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-08-29 15:01:31.833128 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:00.325) 0:04:33.966 *********
2025-08-29 15:01:31.833132 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833136 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833140 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833143 | orchestrator |
2025-08-29 15:01:31.833147 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-08-29 15:01:31.833153 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:01.661) 0:04:35.627 *********
2025-08-29 15:01:31.833157 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833161 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833165 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833168 | orchestrator |
2025-08-29 15:01:31.833172 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-08-29 15:01:31.833176 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:01.416) 0:04:37.044 *********
2025-08-29 15:01:31.833180 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833184 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833187 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833191 | orchestrator |
2025-08-29 15:01:31.833197 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-08-29 15:01:31.833201 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.325) 0:04:37.369 *********
2025-08-29 15:01:31.833205 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833209 | orchestrator |
2025-08-29 15:01:31.833213 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-08-29 15:01:31.833219 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.787) 0:04:38.157 *********
2025-08-29 15:01:31.833223 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833227 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833231 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833235 | orchestrator |
2025-08-29 15:01:31.833238 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-08-29 15:01:31.833242 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.334) 0:04:38.491 *********
2025-08-29 15:01:31.833246 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833250 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833254 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833257 | orchestrator |
2025-08-29 15:01:31.833261 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-08-29 15:01:31.833265 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.334) 0:04:38.826 *********
2025-08-29 15:01:31.833269 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833273 | orchestrator |
2025-08-29 15:01:31.833277 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-08-29 15:01:31.833280 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.814) 0:04:39.641 *********
2025-08-29 15:01:31.833284 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833288 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833292 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833295 | orchestrator |
2025-08-29 15:01:31.833299 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-08-29 15:01:31.833303 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:01.933) 0:04:41.575 *********
2025-08-29 15:01:31.833307 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833310 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833314 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833318 | orchestrator |
2025-08-29 15:01:31.833322 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-08-29 15:01:31.833325 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:01.356) 0:04:42.932 *********
2025-08-29 15:01:31.833329 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833333 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833337 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833340 | orchestrator |
2025-08-29 15:01:31.833344 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-08-29 15:01:31.833348 | orchestrator | Friday 29 August 2025 14:54:34 +0000 (0:00:01.949) 0:04:44.881 *********
2025-08-29 15:01:31.833352 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.833355 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.833359 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.833363 | orchestrator |
2025-08-29 15:01:31.833367 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-08-29 15:01:31.833371 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:02.956) 0:04:47.838 *********
2025-08-29 15:01:31.833374 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833378 | orchestrator |
2025-08-29 15:01:31.833382 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-08-29 15:01:31.833386 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:00.489) 0:04:48.327 *********
2025-08-29 15:01:31.833390 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-08-29 15:01:31.833393 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833397 | orchestrator |
2025-08-29 15:01:31.833401 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-08-29 15:01:31.833405 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:22.057) 0:05:10.385 *********
2025-08-29 15:01:31.833408 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833415 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833419 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833423 | orchestrator |
2025-08-29 15:01:31.833427 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-08-29 15:01:31.833431 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:09.929) 0:05:20.315 *********
2025-08-29 15:01:31.833434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833438 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833446 | orchestrator |
2025-08-29 15:01:31.833449 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-08-29 15:01:31.833453 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:00.346) 0:05:20.661 *********
2025-08-29 15:01:31.833461 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-08-29 15:01:31.833478 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-08-29 15:01:31.833484 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-08-29 15:01:31.833489 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-08-29 15:01:31.833494 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-08-29 15:01:31.833498 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__eec8cbcbff7c71bbb935a341559bd73da36e6033'}])
2025-08-29 15:01:31.833504 | orchestrator |
2025-08-29 15:01:31.833507 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 15:01:31.833511 | orchestrator | Friday 29 August 2025 14:55:24 +0000 (0:00:14.971) 0:05:35.633 *********
2025-08-29 15:01:31.833515 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833519 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833522 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833526 | orchestrator |
2025-08-29 15:01:31.833530 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 15:01:31.833534 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:00.388) 0:05:36.021 *********
2025-08-29 15:01:31.833538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833545 | orchestrator |
2025-08-29 15:01:31.833549 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 15:01:31.833553 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:00.590) 0:05:36.612 *********
2025-08-29 15:01:31.833557 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833560 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833564 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833568 | orchestrator |
2025-08-29 15:01:31.833572 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 15:01:31.833575 | orchestrator | Friday 29 August 2025 14:55:26 +0000 (0:00:00.567) 0:05:37.179 *********
2025-08-29 15:01:31.833579 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833583 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833587 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833591 | orchestrator |
2025-08-29 15:01:31.833594 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 15:01:31.833598 | orchestrator | Friday 29 August 2025 14:55:26 +0000 (0:00:00.450) 0:05:37.630 *********
2025-08-29 15:01:31.833602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.833606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:01:31.833610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:01:31.833614 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833617 | orchestrator |
2025-08-29 15:01:31.833621 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 15:01:31.833625 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:00.641) 0:05:38.271 *********
2025-08-29 15:01:31.833629 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833633 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833636 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833640 | orchestrator |
2025-08-29 15:01:31.833644 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-08-29 15:01:31.833648 | orchestrator |
2025-08-29 15:01:31.833652 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:01:31.833658 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.551) 0:05:38.823 *********
2025-08-29 15:01:31.833662 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833666 | orchestrator |
2025-08-29 15:01:31.833669 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:01:31.833673 | orchestrator | Friday 29 August 2025 14:55:28 +0000 (0:00:00.808) 0:05:39.632 *********
2025-08-29 15:01:31.833680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.833684 | orchestrator |
2025-08-29 15:01:31.833688 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:01:31.833691 | orchestrator | Friday 29 August 2025 14:55:29 +0000 (0:00:00.509) 0:05:40.141 *********
2025-08-29 15:01:31.833695 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833699 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833703 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833707 | orchestrator |
2025-08-29 15:01:31.833710 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:01:31.833714 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:01.012) 0:05:41.154 *********
2025-08-29 15:01:31.833718 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833725 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833729 | orchestrator |
2025-08-29 15:01:31.833733 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:01:31.833737 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.339) 0:05:41.493 *********
2025-08-29 15:01:31.833744 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833748 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833751 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833785 | orchestrator |
2025-08-29 15:01:31.833790 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:01:31.833794 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.314) 0:05:41.808 *********
2025-08-29 15:01:31.833797 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833801 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833805 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833809 | orchestrator |
2025-08-29 15:01:31.833813 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:01:31.833816 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.336) 0:05:42.144 *********
2025-08-29 15:01:31.833820 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833824 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833828 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833832 | orchestrator |
2025-08-29 15:01:31.833835 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:01:31.833839 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:01.042) 0:05:43.186 *********
2025-08-29 15:01:31.833843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833851 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833855 | orchestrator |
2025-08-29 15:01:31.833858 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:01:31.833862 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:00.381) 0:05:43.568 *********
2025-08-29 15:01:31.833866 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833870 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833873 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833877 | orchestrator |
2025-08-29 15:01:31.833881 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:01:31.833885 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:00.334) 0:05:43.903 *********
2025-08-29 15:01:31.833888 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833892 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833896 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833900 | orchestrator |
2025-08-29 15:01:31.833904 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:01:31.833907 | orchestrator | Friday 29 August 2025 14:55:33 +0000 (0:00:00.732) 0:05:44.635 *********
2025-08-29 15:01:31.833911 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833915 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833919 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833923 | orchestrator |
2025-08-29 15:01:31.833927 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:01:31.833931 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:01.257) 0:05:45.893 *********
2025-08-29 15:01:31.833934 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833938 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833942 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833946 | orchestrator |
2025-08-29 15:01:31.833949 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:01:31.833953 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:00.426) 0:05:46.319 *********
2025-08-29 15:01:31.833957 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.833961 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.833965 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.833968 | orchestrator |
2025-08-29 15:01:31.833972 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:01:31.833976 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.463) 0:05:46.783 *********
2025-08-29 15:01:31.833980 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.833987 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.833991 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.833995 | orchestrator |
2025-08-29 15:01:31.833998 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:01:31.834002 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.296) 0:05:47.080 *********
2025-08-29 15:01:31.834006 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834010 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834033 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834038 | orchestrator |
2025-08-29 15:01:31.834042 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:01:31.834049 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.418) 0:05:47.498 *********
2025-08-29 15:01:31.834053 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834057 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834061 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834064 | orchestrator |
2025-08-29 15:01:31.834068 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:01:31.834072 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.285) 0:05:47.784 *********
2025-08-29 15:01:31.834076 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834080 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834087 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834090 | orchestrator |
2025-08-29 15:01:31.834094 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:01:31.834098 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.263) 0:05:48.047 *********
2025-08-29 15:01:31.834102 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834106 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834110 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834113 | orchestrator |
2025-08-29 15:01:31.834117 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:01:31.834121 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:00.257) 0:05:48.304 *********
2025-08-29 15:01:31.834125 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.834129 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.834132 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.834136 | orchestrator |
2025-08-29 15:01:31.834140 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:01:31.834144 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.439) 0:05:48.744 *********
2025-08-29 15:01:31.834147 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.834151 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.834155 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.834159 | orchestrator |
2025-08-29 15:01:31.834162 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:01:31.834166 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.293) 0:05:49.038 *********
2025-08-29 15:01:31.834170 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.834174 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.834178 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.834181 | orchestrator |
2025-08-29 15:01:31.834185 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-08-29 15:01:31.834189 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:00.642) 0:05:49.680 *********
2025-08-29 15:01:31.834193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:01:31.834197 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:01:31.834201 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:01:31.834205 | orchestrator |
2025-08-29 15:01:31.834208 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-08-29 15:01:31.834212 | orchestrator | Friday 29 August 2025 14:55:39 +0000 (0:00:00.899) 0:05:50.580 *********
2025-08-29 15:01:31.834216 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:01:31.834223 | orchestrator |
2025-08-29 15:01:31.834227 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-08-29 15:01:31.834231 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:00.772) 0:05:51.352 *********
2025-08-29 15:01:31.834235 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.834238 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.834242 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.834246 | orchestrator |
2025-08-29 15:01:31.834250 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-08-29 15:01:31.834254 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:00.760) 0:05:52.112 *********
2025-08-29 15:01:31.834257 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834261 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834265 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834269 | orchestrator |
2025-08-29 15:01:31.834275 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-08-29 15:01:31.834282 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:00.322) 0:05:52.435 *********
2025-08-29 15:01:31.834287 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834294 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834300 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834306 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-08-29 15:01:31.834311 | orchestrator |
2025-08-29 15:01:31.834317 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-08-29 15:01:31.834324 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:10.892) 0:06:03.327 *********
2025-08-29 15:01:31.834330 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.834336 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.834343 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.834347 | orchestrator |
2025-08-29 15:01:31.834351 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-08-29 15:01:31.834354 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:00.611) 0:06:03.939 *********
2025-08-29 15:01:31.834358 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834362 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 15:01:31.834366 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 15:01:31.834370 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834373 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:01:31.834377 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:01:31.834381 | orchestrator |
2025-08-29 15:01:31.834385 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-08-29 15:01:31.834388 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:02.133) 0:06:06.072 *********
2025-08-29 15:01:31.834396 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834400 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 15:01:31.834404 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 15:01:31.834407 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:01:31.834411 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 15:01:31.834415 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 15:01:31.834418 | orchestrator |
2025-08-29 15:01:31.834422 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-08-29 15:01:31.834429 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:01.283) 0:06:07.356 *********
2025-08-29 15:01:31.834433 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.834436 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.834440 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.834444 | orchestrator |
2025-08-29 15:01:31.834448 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-08-29 15:01:31.834458 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.956) 0:06:08.312 *********
2025-08-29 15:01:31.834461 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834465 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834469 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834473 | orchestrator |
2025-08-29 15:01:31.834476 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-08-29 15:01:31.834480 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:00.578) 0:06:08.891 *********
2025-08-29 15:01:31.834484 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.834488 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.834491 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.834495 | orchestrator |
2025-08-29 15:01:31.834499 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-08-29 15:01:31.834503 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:00.326) 0:06:09.217
********* 2025-08-29 15:01:31.834506 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:31.834510 | orchestrator | 2025-08-29 15:01:31.834514 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-08-29 15:01:31.834518 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.515) 0:06:09.732 ********* 2025-08-29 15:01:31.834521 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.834525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.834529 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.834533 | orchestrator | 2025-08-29 15:01:31.834536 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-08-29 15:01:31.834540 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.595) 0:06:10.328 ********* 2025-08-29 15:01:31.834544 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.834548 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.834552 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.834556 | orchestrator | 2025-08-29 15:01:31.834559 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-08-29 15:01:31.834563 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.355) 0:06:10.683 ********* 2025-08-29 15:01:31.834567 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:31.834571 | orchestrator | 2025-08-29 15:01:31.834575 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-08-29 15:01:31.834579 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:00.555) 0:06:11.239 ********* 2025-08-29 15:01:31.834582 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834586 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 15:01:31.834590 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834594 | orchestrator | 2025-08-29 15:01:31.834598 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-08-29 15:01:31.834601 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:01.518) 0:06:12.758 ********* 2025-08-29 15:01:31.834605 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834609 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.834612 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834616 | orchestrator | 2025-08-29 15:01:31.834620 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-08-29 15:01:31.834624 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:01.194) 0:06:13.952 ********* 2025-08-29 15:01:31.834628 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834631 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.834635 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834639 | orchestrator | 2025-08-29 15:01:31.834643 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-08-29 15:01:31.834647 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:01.725) 0:06:15.678 ********* 2025-08-29 15:01:31.834651 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834658 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834661 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.834665 | orchestrator | 2025-08-29 15:01:31.834669 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-08-29 15:01:31.834673 | orchestrator | Friday 29 August 2025 14:56:06 +0000 (0:00:02.006) 0:06:17.684 ********* 2025-08-29 15:01:31.834677 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.834680 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:01:31.834684 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-08-29 15:01:31.834688 | orchestrator | 2025-08-29 15:01:31.834692 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-08-29 15:01:31.834695 | orchestrator | Friday 29 August 2025 14:56:07 +0000 (0:00:00.676) 0:06:18.361 ********* 2025-08-29 15:01:31.834699 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-08-29 15:01:31.834703 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-08-29 15:01:31.834709 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-08-29 15:01:31.834713 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-08-29 15:01:31.834717 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
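The five `FAILED - RETRYING` records above, followed by a final `ok`, are Ansible's `until`/`retries`/`delay` polling loop at work: the task re-runs its check every delay interval until it passes or the retry budget (30 here) is exhausted. A minimal sketch of that loop, with a hypothetical probe standing in for the mgr status check (this is not ceph-ansible's actual code):

```python
import time

def wait_for(check, retries=30, delay=10.0, sleep=time.sleep):
    """Poll `check` until it returns True, mirroring Ansible's
    retries/delay semantics; returns the attempt number that succeeded."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            sleep(delay)
    raise TimeoutError(f"condition not met after {retries} retries")

# Hypothetical probe that only succeeds on its sixth call, like the
# run above (five retry records consumed before the mgr came up).
calls = {"n": 0}
def mgr_up():
    calls["n"] += 1
    return calls["n"] >= 6

attempts = wait_for(mgr_up, retries=30, delay=0, sleep=lambda _: None)
```

With the probe above, `attempts` comes back as 6, matching the five failed polls visible in the log before the `ok`.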
2025-08-29 15:01:31.834721 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:01:31.834725 | orchestrator | 2025-08-29 15:01:31.834728 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-08-29 15:01:31.834735 | orchestrator | Friday 29 August 2025 14:56:38 +0000 (0:00:30.575) 0:06:48.937 ********* 2025-08-29 15:01:31.834739 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:01:31.834743 | orchestrator | 2025-08-29 15:01:31.834746 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-08-29 15:01:31.834750 | orchestrator | Friday 29 August 2025 14:56:39 +0000 (0:00:01.372) 0:06:50.310 ********* 2025-08-29 15:01:31.834765 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.834769 | orchestrator | 2025-08-29 15:01:31.834773 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-08-29 15:01:31.834777 | orchestrator | Friday 29 August 2025 14:56:39 +0000 (0:00:00.329) 0:06:50.639 ********* 2025-08-29 15:01:31.834781 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.834785 | orchestrator | 2025-08-29 15:01:31.834788 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-08-29 15:01:31.834792 | orchestrator | Friday 29 August 2025 14:56:40 +0000 (0:00:00.161) 0:06:50.801 ********* 2025-08-29 15:01:31.834796 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-08-29 15:01:31.834800 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-08-29 15:01:31.834803 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-08-29 15:01:31.834807 | orchestrator | 2025-08-29 15:01:31.834811 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-08-29 15:01:31.834815 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:06.382) 0:06:57.184 ********* 2025-08-29 15:01:31.834819 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-08-29 15:01:31.834822 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-08-29 15:01:31.834826 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-08-29 15:01:31.834830 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-08-29 15:01:31.834834 | orchestrator | 2025-08-29 15:01:31.834840 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:01:31.834850 | orchestrator | Friday 29 August 2025 14:56:51 +0000 (0:00:04.943) 0:07:02.127 ********* 2025-08-29 15:01:31.834855 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834861 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.834866 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834872 | orchestrator | 2025-08-29 15:01:31.834877 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:01:31.834883 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.688) 0:07:02.816 ********* 2025-08-29 15:01:31.834888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:01:31.834893 | orchestrator | 2025-08-29 15:01:31.834899 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:01:31.834905 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.512) 0:07:03.328 ********* 2025-08-29 15:01:31.834910 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.834915 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.834921 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:01:31.834927 | orchestrator | 2025-08-29 15:01:31.834932 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:01:31.834937 | orchestrator | Friday 29 August 2025 14:56:53 +0000 (0:00:00.651) 0:07:03.980 ********* 2025-08-29 15:01:31.834943 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:01:31.834948 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:01:31.834954 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:01:31.834959 | orchestrator | 2025-08-29 15:01:31.834965 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:01:31.834971 | orchestrator | Friday 29 August 2025 14:56:54 +0000 (0:00:01.335) 0:07:05.315 ********* 2025-08-29 15:01:31.834977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:01:31.834983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:01:31.834988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:01:31.834994 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.834999 | orchestrator | 2025-08-29 15:01:31.835005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:01:31.835011 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:00.627) 0:07:05.943 ********* 2025-08-29 15:01:31.835016 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.835021 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.835026 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.835031 | orchestrator | 2025-08-29 15:01:31.835037 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-08-29 15:01:31.835042 | orchestrator | 2025-08-29 15:01:31.835048 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 
15:01:31.835054 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:00.841) 0:07:06.784 ********* 2025-08-29 15:01:31.835060 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.835067 | orchestrator | 2025-08-29 15:01:31.835073 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:01:31.835085 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:00.569) 0:07:07.354 ********* 2025-08-29 15:01:31.835092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.835099 | orchestrator | 2025-08-29 15:01:31.835105 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:01:31.835109 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:00.769) 0:07:08.123 ********* 2025-08-29 15:01:31.835113 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835117 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835120 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835129 | orchestrator | 2025-08-29 15:01:31.835133 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:01:31.835144 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:00.307) 0:07:08.431 ********* 2025-08-29 15:01:31.835148 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835152 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835156 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835160 | orchestrator | 2025-08-29 15:01:31.835163 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:01:31.835167 | orchestrator | Friday 29 August 2025 14:56:58 +0000 (0:00:00.653) 0:07:09.084 ********* 
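Each task header in this log is followed by a stamp like `Friday 29 August 2025 14:56:58 +0000 (0:00:00.653) 0:07:09.084`, emitted by Ansible's `profile_tasks` callback: the parenthesised value is the previous task's duration and the trailing value is cumulative playbook time. A small parser sketch for pulling both out of a log line (function and regex names are illustrative):

```python
import re
from datetime import timedelta

# Matches "(H:MM:SS.fff) H:MM:SS.fff" as printed by profile_tasks.
STAMP = re.compile(r"\((\d+):(\d{2}):(\d{2})\.(\d+)\)\s+(\d+):(\d{2}):(\d{2})\.(\d+)")

def parse_profile_stamp(line):
    """Return (task_duration, cumulative_elapsed) as timedeltas."""
    m = STAMP.search(line)
    if not m:
        raise ValueError("no profile_tasks timing stamp found")
    h1, m1, s1, f1, h2, m2, s2, f2 = m.groups()
    def td(h, mn, s, frac):
        return timedelta(hours=int(h), minutes=int(mn), seconds=int(s),
                         milliseconds=int(frac.ljust(3, "0")[:3]))
    return td(h1, m1, s1, f1), td(h2, m2, s2, f2)

dur, total = parse_profile_stamp(
    "Friday 29 August 2025 14:56:58 +0000 (0:00:00.653) 0:07:09.084 *********")
```

For the sample line, `dur` is 0.653 s and `total` is 429.084 s, which is how the per-task timings quoted throughout this log can be aggregated.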
2025-08-29 15:01:31.835171 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835175 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835178 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835182 | orchestrator | 2025-08-29 15:01:31.835186 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:01:31.835190 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:00.816) 0:07:09.901 ********* 2025-08-29 15:01:31.835193 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835197 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835201 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835204 | orchestrator | 2025-08-29 15:01:31.835208 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:01:31.835212 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:00.944) 0:07:10.846 ********* 2025-08-29 15:01:31.835216 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835220 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835224 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835227 | orchestrator | 2025-08-29 15:01:31.835231 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:01:31.835235 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:00.331) 0:07:11.178 ********* 2025-08-29 15:01:31.835239 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835246 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835250 | orchestrator | 2025-08-29 15:01:31.835254 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:01:31.835258 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:00.334) 0:07:11.512 ********* 2025-08-29 15:01:31.835261 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835265 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835269 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835273 | orchestrator | 2025-08-29 15:01:31.835277 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:01:31.835280 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:00.324) 0:07:11.836 ********* 2025-08-29 15:01:31.835284 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835288 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835292 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835295 | orchestrator | 2025-08-29 15:01:31.835299 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:01:31.835303 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.995) 0:07:12.832 ********* 2025-08-29 15:01:31.835307 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835311 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835314 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835318 | orchestrator | 2025-08-29 15:01:31.835322 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:01:31.835326 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.731) 0:07:13.563 ********* 2025-08-29 15:01:31.835330 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835334 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835337 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835341 | orchestrator | 2025-08-29 15:01:31.835345 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:01:31.835349 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:00.353) 0:07:13.917 ********* 2025-08-29 15:01:31.835356 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:01:31.835360 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835364 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835368 | orchestrator | 2025-08-29 15:01:31.835371 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:01:31.835375 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:00.297) 0:07:14.215 ********* 2025-08-29 15:01:31.835379 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835382 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835386 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835390 | orchestrator | 2025-08-29 15:01:31.835394 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:01:31.835398 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.591) 0:07:14.806 ********* 2025-08-29 15:01:31.835401 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835405 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835409 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835412 | orchestrator | 2025-08-29 15:01:31.835416 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:01:31.835420 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.349) 0:07:15.156 ********* 2025-08-29 15:01:31.835424 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835427 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835431 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835435 | orchestrator | 2025-08-29 15:01:31.835439 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:01:31.835442 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.343) 0:07:15.499 ********* 2025-08-29 15:01:31.835450 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835454 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835457 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835461 | orchestrator | 2025-08-29 15:01:31.835465 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:01:31.835469 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:00.336) 0:07:15.836 ********* 2025-08-29 15:01:31.835473 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835484 | orchestrator | 2025-08-29 15:01:31.835488 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:01:31.835495 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:00.316) 0:07:16.153 ********* 2025-08-29 15:01:31.835498 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835502 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835506 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835510 | orchestrator | 2025-08-29 15:01:31.835513 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:01:31.835517 | orchestrator | Friday 29 August 2025 14:57:06 +0000 (0:00:00.685) 0:07:16.838 ********* 2025-08-29 15:01:31.835521 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835525 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835528 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835532 | orchestrator | 2025-08-29 15:01:31.835536 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:01:31.835540 | orchestrator | Friday 29 August 2025 14:57:06 +0000 (0:00:00.439) 0:07:17.278 ********* 2025-08-29 15:01:31.835544 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835547 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 15:01:31.835551 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835555 | orchestrator | 2025-08-29 15:01:31.835559 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 15:01:31.835562 | orchestrator | Friday 29 August 2025 14:57:07 +0000 (0:00:00.601) 0:07:17.879 ********* 2025-08-29 15:01:31.835566 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835573 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835577 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835581 | orchestrator | 2025-08-29 15:01:31.835584 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:01:31.835588 | orchestrator | Friday 29 August 2025 14:57:07 +0000 (0:00:00.648) 0:07:18.528 ********* 2025-08-29 15:01:31.835592 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:01:31.835596 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:01:31.835599 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:01:31.835603 | orchestrator | 2025-08-29 15:01:31.835607 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 15:01:31.835611 | orchestrator | Friday 29 August 2025 14:57:08 +0000 (0:00:00.623) 0:07:19.152 ********* 2025-08-29 15:01:31.835615 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.835618 | orchestrator | 2025-08-29 15:01:31.835622 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 15:01:31.835626 | orchestrator | Friday 29 August 2025 14:57:08 +0000 (0:00:00.546) 0:07:19.699 ********* 2025-08-29 15:01:31.835630 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:01:31.835634 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835637 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835641 | orchestrator | 2025-08-29 15:01:31.835645 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 15:01:31.835649 | orchestrator | Friday 29 August 2025 14:57:09 +0000 (0:00:00.552) 0:07:20.251 ********* 2025-08-29 15:01:31.835652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835656 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835660 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835664 | orchestrator | 2025-08-29 15:01:31.835668 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 15:01:31.835671 | orchestrator | Friday 29 August 2025 14:57:09 +0000 (0:00:00.326) 0:07:20.578 ********* 2025-08-29 15:01:31.835675 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835679 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835683 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835686 | orchestrator | 2025-08-29 15:01:31.835690 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 15:01:31.835694 | orchestrator | Friday 29 August 2025 14:57:10 +0000 (0:00:00.605) 0:07:21.183 ********* 2025-08-29 15:01:31.835698 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.835701 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.835705 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.835709 | orchestrator | 2025-08-29 15:01:31.835712 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 15:01:31.835716 | orchestrator | Friday 29 August 2025 14:57:10 +0000 (0:00:00.342) 0:07:21.526 ********* 2025-08-29 15:01:31.835720 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:01:31.835724 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:01:31.835728 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:01:31.835731 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:01:31.835735 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:01:31.835739 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:01:31.835743 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:01:31.835749 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:01:31.835774 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:01:31.835779 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:01:31.835783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:01:31.835787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:01:31.835793 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:01:31.835797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:01:31.835800 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:01:31.835804 | orchestrator | 2025-08-29 15:01:31.835808 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
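The loop items applied by "Apply operating system tuning" above are plain name/value dicts (ceph-ansible's `os_tuning_params`), with an optional `enable` flag. The real role applies them through the `ansible.posix.sysctl` module; purely as an illustration, here is how those same items would render as a `sysctl.conf` fragment:

```python
def render_sysctl(items):
    """Render tuning items as sysctl.conf-style lines, skipping any
    entry explicitly disabled via 'enable': False."""
    lines = []
    for item in items:
        if not item.get("enable", True):
            continue
        lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines)

# The exact items applied to testbed-node-3/4/5 in the log above.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]
conf = render_sysctl(os_tuning_params)
```

Note `vm.min_free_kbytes` is a string here because the role computes it from the default value fetched two tasks earlier (`Get default vm.min_free_kbytes`).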
2025-08-29 15:01:31.835812 | orchestrator | Friday 29 August 2025 14:57:13 +0000 (0:00:02.465) 0:07:23.992 ********* 2025-08-29 15:01:31.835816 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.835820 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.835825 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.835831 | orchestrator | 2025-08-29 15:01:31.835836 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 15:01:31.835842 | orchestrator | Friday 29 August 2025 14:57:13 +0000 (0:00:00.341) 0:07:24.333 ********* 2025-08-29 15:01:31.835847 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.835853 | orchestrator | 2025-08-29 15:01:31.835858 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 15:01:31.835863 | orchestrator | Friday 29 August 2025 14:57:14 +0000 (0:00:00.529) 0:07:24.863 ********* 2025-08-29 15:01:31.835869 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:01:31.835874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:01:31.835879 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:01:31.835885 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 15:01:31.835891 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 15:01:31.835896 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 15:01:31.835902 | orchestrator | 2025-08-29 15:01:31.835907 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 15:01:31.835912 | orchestrator | Friday 29 August 2025 14:57:15 +0000 (0:00:01.302) 0:07:26.165 ********* 2025-08-29 15:01:31.835918 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.835924 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:01:31.835931 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:01:31.835937 | orchestrator | 2025-08-29 15:01:31.835943 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:01:31.835949 | orchestrator | Friday 29 August 2025 14:57:17 +0000 (0:00:02.191) 0:07:28.357 ********* 2025-08-29 15:01:31.835955 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:01:31.835962 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:01:31.835968 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.835972 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:01:31.835976 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:01:31.835980 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.835983 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:01:31.835987 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:01:31.835991 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.835994 | orchestrator | 2025-08-29 15:01:31.835998 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 15:01:31.836007 | orchestrator | Friday 29 August 2025 14:57:18 +0000 (0:00:01.214) 0:07:29.571 ********* 2025-08-29 15:01:31.836011 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:01:31.836015 | orchestrator | 2025-08-29 15:01:31.836018 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 15:01:31.836022 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:02.067) 0:07:31.639 ********* 2025-08-29 15:01:31.836026 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836030 | orchestrator | 2025-08-29 15:01:31.836033 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 15:01:31.836037 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:00.542) 0:07:32.182 ********* 2025-08-29 15:01:31.836041 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5', 'data_vg': 'ceph-bbd8d281-36ff-5086-a3ca-2bb41bb9eed5'}) 2025-08-29 15:01:31.836045 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-346e0f34-2e25-5bf0-9181-de3fb405aafc', 'data_vg': 'ceph-346e0f34-2e25-5bf0-9181-de3fb405aafc'}) 2025-08-29 15:01:31.836049 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dda150a8-39d5-5493-abc9-b03fdb7d62e3', 'data_vg': 'ceph-dda150a8-39d5-5493-abc9-b03fdb7d62e3'}) 2025-08-29 15:01:31.836138 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ca3f02ac-b393-504d-bf7e-2b1a4059feca', 'data_vg': 'ceph-ca3f02ac-b393-504d-bf7e-2b1a4059feca'}) 2025-08-29 15:01:31.836143 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d9c5dbd3-dfd6-59a8-a565-791b79996791', 'data_vg': 'ceph-d9c5dbd3-dfd6-59a8-a565-791b79996791'}) 2025-08-29 15:01:31.836147 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c0ce2805-49d2-5cc8-844e-183b484fa1c4', 'data_vg': 'ceph-c0ce2805-49d2-5cc8-844e-183b484fa1c4'}) 2025-08-29 15:01:31.836151 | orchestrator | 2025-08-29 15:01:31.836155 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 15:01:31.836161 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:42.045) 0:08:14.227 ********* 2025-08-29 15:01:31.836165 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836169 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
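[editor's note] The "Use ceph-volume to create osds" task above iterates over pre-created LVM volumes (two per node here, hence the ~42 s runtime). A minimal sketch of that per-item call, assuming ceph-ansible's `ceph_volume` module and the `data`/`data_vg` item keys shown in the log:

```
# Sketch only; the real task lives in roles/ceph-osd/tasks/scenarios/lvm.yml
- name: Use ceph-volume to create osds
  ceph_volume:
    objectstore: bluestore
    action: create
    data: "{{ item.data }}"        # e.g. osd-block-dda150a8-...
    data_vg: "{{ item.data_vg }}"  # e.g. ceph-dda150a8-...
  loop: "{{ lvm_volumes }}"
```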
15:01:31.836173 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836177 | orchestrator | 2025-08-29 15:01:31.836181 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 15:01:31.836184 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:00.358) 0:08:14.586 ********* 2025-08-29 15:01:31.836188 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836192 | orchestrator | 2025-08-29 15:01:31.836196 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 15:01:31.836200 | orchestrator | Friday 29 August 2025 14:58:04 +0000 (0:00:00.565) 0:08:15.152 ********* 2025-08-29 15:01:31.836203 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.836207 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.836211 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.836215 | orchestrator | 2025-08-29 15:01:31.836219 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 15:01:31.836222 | orchestrator | Friday 29 August 2025 14:58:05 +0000 (0:00:01.032) 0:08:16.184 ********* 2025-08-29 15:01:31.836226 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.836230 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.836234 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.836238 | orchestrator | 2025-08-29 15:01:31.836242 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 15:01:31.836245 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:02.533) 0:08:18.717 ********* 2025-08-29 15:01:31.836249 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836257 | orchestrator | 2025-08-29 15:01:31.836261 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-08-29 15:01:31.836264 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:00.559) 0:08:19.276 ********* 2025-08-29 15:01:31.836268 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.836272 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.836276 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.836280 | orchestrator | 2025-08-29 15:01:31.836283 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 15:01:31.836287 | orchestrator | Friday 29 August 2025 14:58:10 +0000 (0:00:01.496) 0:08:20.773 ********* 2025-08-29 15:01:31.836291 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.836295 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.836299 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.836302 | orchestrator | 2025-08-29 15:01:31.836306 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 15:01:31.836310 | orchestrator | Friday 29 August 2025 14:58:11 +0000 (0:00:01.163) 0:08:21.936 ********* 2025-08-29 15:01:31.836314 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.836318 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.836321 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.836325 | orchestrator | 2025-08-29 15:01:31.836329 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 15:01:31.836333 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:01.968) 0:08:23.905 ********* 2025-08-29 15:01:31.836336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836340 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836344 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836348 | orchestrator | 2025-08-29 15:01:31.836351 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-08-29 15:01:31.836357 | orchestrator | Friday 29 August 2025 14:58:13 +0000 (0:00:00.346) 0:08:24.252 ********* 2025-08-29 15:01:31.836363 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836368 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836374 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836379 | orchestrator | 2025-08-29 15:01:31.836385 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 15:01:31.836390 | orchestrator | Friday 29 August 2025 14:58:14 +0000 (0:00:00.598) 0:08:24.851 ********* 2025-08-29 15:01:31.836396 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:01:31.836402 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-08-29 15:01:31.836407 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-08-29 15:01:31.836413 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-08-29 15:01:31.836419 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-08-29 15:01:31.836426 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-08-29 15:01:31.836432 | orchestrator | 2025-08-29 15:01:31.836438 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 15:01:31.836444 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:01.111) 0:08:25.962 ********* 2025-08-29 15:01:31.836450 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 15:01:31.836456 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:01:31.836460 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 15:01:31.836463 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-08-29 15:01:31.836467 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-08-29 15:01:31.836471 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 15:01:31.836475 | orchestrator | 2025-08-29 15:01:31.836478 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-08-29 15:01:31.836486 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:02.287) 0:08:28.250 ********* 2025-08-29 15:01:31.836490 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 15:01:31.836498 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:01:31.836502 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-08-29 15:01:31.836505 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-08-29 15:01:31.836509 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-08-29 15:01:31.836513 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 15:01:31.836516 | orchestrator | 2025-08-29 15:01:31.836520 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 15:01:31.836529 | orchestrator | Friday 29 August 2025 14:58:21 +0000 (0:00:03.849) 0:08:32.099 ********* 2025-08-29 15:01:31.836533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836537 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836540 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:01:31.836544 | orchestrator | 2025-08-29 15:01:31.836548 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 15:01:31.836552 | orchestrator | Friday 29 August 2025 14:58:24 +0000 (0:00:03.464) 0:08:35.564 ********* 2025-08-29 15:01:31.836556 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836559 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836563 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
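[editor's note] The noup flag set before OSD creation and unset above (delegated to the first monitor, run on one node only) keeps new OSDs from being marked up before they are fully configured. The "Wait for all osd to be up" task then polls until the up count matches; the single FAILED-RETRYING line with 60 retries left is the normal first pass of that loop, not an error. A sketch of the retry pattern, assuming `osd stat` JSON output and illustrative retry/delay values:

```
# Sketch of the wait loop; field names per "ceph osd stat -f json"
- name: Wait for all osd to be up
  command: ceph osd stat -f json
  register: osd_stat
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds > 0 and
    (osd_stat.stdout | from_json).num_osds ==
    (osd_stat.stdout | from_json).num_up_osds
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```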
2025-08-29 15:01:31.836567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:01:31.836571 | orchestrator | 2025-08-29 15:01:31.836575 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 15:01:31.836579 | orchestrator | Friday 29 August 2025 14:58:37 +0000 (0:00:12.360) 0:08:47.925 ********* 2025-08-29 15:01:31.836582 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836586 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836590 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836594 | orchestrator | 2025-08-29 15:01:31.836597 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:01:31.836601 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:01.044) 0:08:48.969 ********* 2025-08-29 15:01:31.836605 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836609 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836616 | orchestrator | 2025-08-29 15:01:31.836620 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 15:01:31.836623 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.358) 0:08:49.328 ********* 2025-08-29 15:01:31.836627 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836631 | orchestrator | 2025-08-29 15:01:31.836635 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 15:01:31.836639 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:00.549) 0:08:49.877 ********* 2025-08-29 15:01:31.836642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.836646 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-08-29 15:01:31.836650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.836654 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836657 | orchestrator | 2025-08-29 15:01:31.836661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 15:01:31.836665 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:00.938) 0:08:50.815 ********* 2025-08-29 15:01:31.836669 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836672 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836676 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836680 | orchestrator | 2025-08-29 15:01:31.836683 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 15:01:31.836687 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:00.330) 0:08:51.146 ********* 2025-08-29 15:01:31.836694 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836698 | orchestrator | 2025-08-29 15:01:31.836702 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 15:01:31.836706 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:00.226) 0:08:51.372 ********* 2025-08-29 15:01:31.836709 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836713 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836717 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836721 | orchestrator | 2025-08-29 15:01:31.836724 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 15:01:31.836728 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.419) 0:08:51.792 ********* 2025-08-29 15:01:31.836732 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836736 | orchestrator | 2025-08-29 15:01:31.836740 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 15:01:31.836744 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.217) 0:08:52.010 ********* 2025-08-29 15:01:31.836750 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836784 | orchestrator | 2025-08-29 15:01:31.836788 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 15:01:31.836792 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.205) 0:08:52.215 ********* 2025-08-29 15:01:31.836796 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836800 | orchestrator | 2025-08-29 15:01:31.836803 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 15:01:31.836807 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.108) 0:08:52.324 ********* 2025-08-29 15:01:31.836811 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836815 | orchestrator | 2025-08-29 15:01:31.836818 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 15:01:31.836822 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:00.224) 0:08:52.548 ********* 2025-08-29 15:01:31.836826 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836830 | orchestrator | 2025-08-29 15:01:31.836836 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 15:01:31.836840 | orchestrator | Friday 29 August 2025 14:58:42 +0000 (0:00:00.844) 0:08:53.393 ********* 2025-08-29 15:01:31.836844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.836848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.836852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.836856 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:01:31.836859 | orchestrator | 2025-08-29 15:01:31.836863 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 15:01:31.836870 | orchestrator | Friday 29 August 2025 14:58:43 +0000 (0:00:00.429) 0:08:53.823 ********* 2025-08-29 15:01:31.836874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836878 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.836882 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.836886 | orchestrator | 2025-08-29 15:01:31.836889 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 15:01:31.836893 | orchestrator | Friday 29 August 2025 14:58:43 +0000 (0:00:00.362) 0:08:54.186 ********* 2025-08-29 15:01:31.836897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836901 | orchestrator | 2025-08-29 15:01:31.836904 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 15:01:31.836908 | orchestrator | Friday 29 August 2025 14:58:43 +0000 (0:00:00.252) 0:08:54.439 ********* 2025-08-29 15:01:31.836912 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836916 | orchestrator | 2025-08-29 15:01:31.836920 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 15:01:31.836923 | orchestrator | 2025-08-29 15:01:31.836927 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:01:31.836931 | orchestrator | Friday 29 August 2025 14:58:44 +0000 (0:00:00.661) 0:08:55.100 ********* 2025-08-29 15:01:31.836939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836944 | orchestrator | 2025-08-29 15:01:31.836947 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-08-29 15:01:31.836951 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:01.337) 0:08:56.438 ********* 2025-08-29 15:01:31.836955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.836960 | orchestrator | 2025-08-29 15:01:31.836966 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:01:31.836971 | orchestrator | Friday 29 August 2025 14:58:46 +0000 (0:00:01.243) 0:08:57.681 ********* 2025-08-29 15:01:31.836977 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.836983 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.836988 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.837000 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.837009 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837015 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.837021 | orchestrator | 2025-08-29 15:01:31.837026 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:01:31.837032 | orchestrator | Friday 29 August 2025 14:58:47 +0000 (0:00:00.941) 0:08:58.623 ********* 2025-08-29 15:01:31.837038 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837043 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837049 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837055 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837060 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837066 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837071 | orchestrator | 2025-08-29 15:01:31.837077 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:01:31.837082 | orchestrator | Friday 29 
August 2025 14:58:48 +0000 (0:00:00.973) 0:08:59.597 ********* 2025-08-29 15:01:31.837088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837100 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837105 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837111 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837117 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837123 | orchestrator | 2025-08-29 15:01:31.837129 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:01:31.837135 | orchestrator | Friday 29 August 2025 14:58:50 +0000 (0:00:01.254) 0:09:00.851 ********* 2025-08-29 15:01:31.837142 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837148 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837155 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837159 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837163 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837167 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837170 | orchestrator | 2025-08-29 15:01:31.837174 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:01:31.837178 | orchestrator | Friday 29 August 2025 14:58:51 +0000 (0:00:00.992) 0:09:01.844 ********* 2025-08-29 15:01:31.837182 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.837185 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.837189 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.837193 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837197 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.837200 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.837204 | orchestrator | 2025-08-29 15:01:31.837208 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-08-29 15:01:31.837212 | orchestrator | Friday 29 August 2025 14:58:52 +0000 (0:00:01.121) 0:09:02.965 ********* 2025-08-29 15:01:31.837221 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837225 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837229 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837232 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.837236 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.837240 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837244 | orchestrator | 2025-08-29 15:01:31.837252 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:01:31.837256 | orchestrator | Friday 29 August 2025 14:58:52 +0000 (0:00:00.609) 0:09:03.575 ********* 2025-08-29 15:01:31.837260 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837263 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837267 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837271 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.837275 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.837278 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837282 | orchestrator | 2025-08-29 15:01:31.837286 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:01:31.837293 | orchestrator | Friday 29 August 2025 14:58:53 +0000 (0:00:00.937) 0:09:04.512 ********* 2025-08-29 15:01:31.837297 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.837301 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.837305 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.837309 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837313 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837316 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837320 | 
orchestrator | 2025-08-29 15:01:31.837324 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:01:31.837328 | orchestrator | Friday 29 August 2025 14:58:54 +0000 (0:00:01.073) 0:09:05.586 ********* 2025-08-29 15:01:31.837331 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.837335 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.837339 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.837343 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837346 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837350 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837354 | orchestrator | 2025-08-29 15:01:31.837357 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:01:31.837361 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:01.412) 0:09:06.998 ********* 2025-08-29 15:01:31.837365 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837369 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837376 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.837380 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.837384 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837388 | orchestrator | 2025-08-29 15:01:31.837391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:01:31.837395 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:00.586) 0:09:07.585 ********* 2025-08-29 15:01:31.837399 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:01:31.837403 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:01:31.837407 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:01:31.837410 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.837414 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
15:01:31.837418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.837422 | orchestrator | 2025-08-29 15:01:31.837425 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:01:31.837429 | orchestrator | Friday 29 August 2025 14:58:57 +0000 (0:00:00.867) 0:09:08.452 ********* 2025-08-29 15:01:31.837433 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837437 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837441 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837448 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837451 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837455 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837459 | orchestrator | 2025-08-29 15:01:31.837463 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:01:31.837466 | orchestrator | Friday 29 August 2025 14:58:58 +0000 (0:00:00.611) 0:09:09.064 ********* 2025-08-29 15:01:31.837470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837474 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837478 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837482 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.837485 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.837489 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.837495 | orchestrator | 2025-08-29 15:01:31.837501 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:01:31.837507 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:00.934) 0:09:09.998 ********* 2025-08-29 15:01:31.837513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:01:31.837518 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:01:31.837523 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:01:31.837528 | orchestrator | ok: 
[testbed-node-3]
2025-08-29 15:01:31.837534 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.837540 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.837546 | orchestrator |
2025-08-29 15:01:31.837552 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:01:31.837559 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:00.641) 0:09:10.640 *********
2025-08-29 15:01:31.837564 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.837570 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.837576 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.837581 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.837588 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.837593 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.837599 | orchestrator |
2025-08-29 15:01:31.837604 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:01:31.837610 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:00.845) 0:09:11.486 *********
2025-08-29 15:01:31.837616 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:01:31.837622 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:01:31.837628 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:01:31.837635 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.837640 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.837644 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.837648 | orchestrator |
2025-08-29 15:01:31.837651 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:01:31.837655 | orchestrator | Friday 29 August 2025 14:59:01 +0000 (0:00:00.648) 0:09:12.135 *********
2025-08-29 15:01:31.837659 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837663 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.837667 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.837670 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.837674 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.837678 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.837682 | orchestrator |
2025-08-29 15:01:31.837689 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:01:31.837693 | orchestrator | Friday 29 August 2025 14:59:02 +0000 (0:00:00.890) 0:09:13.025 *********
2025-08-29 15:01:31.837697 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837701 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.837704 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.837708 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.837712 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.837715 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.837719 | orchestrator |
2025-08-29 15:01:31.837723 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:01:31.837734 | orchestrator | Friday 29 August 2025 14:59:02 +0000 (0:00:00.637) 0:09:13.663 *********
2025-08-29 15:01:31.837738 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837742 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.837745 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.837749 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.837768 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.837776 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.837780 | orchestrator |
2025-08-29 15:01:31.837783 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-08-29 15:01:31.837787 | orchestrator | Friday 29 August 2025 14:59:04 +0000 (0:00:01.409) 0:09:15.072 *********
2025-08-29 15:01:31.837791 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.837795 | orchestrator |
2025-08-29 15:01:31.837798 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-08-29 15:01:31.837802 | orchestrator | Friday 29 August 2025 14:59:08 +0000 (0:00:04.495) 0:09:19.567 *********
2025-08-29 15:01:31.837806 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837810 | orchestrator |
2025-08-29 15:01:31.837813 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-08-29 15:01:31.837817 | orchestrator | Friday 29 August 2025 14:59:11 +0000 (0:00:02.759) 0:09:22.327 *********
2025-08-29 15:01:31.837821 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837825 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.837828 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.837832 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.837836 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.837840 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.837843 | orchestrator |
2025-08-29 15:01:31.837847 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-08-29 15:01:31.837851 | orchestrator | Friday 29 August 2025 14:59:13 +0000 (0:00:01.622) 0:09:23.949 *********
2025-08-29 15:01:31.837855 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.837858 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.837862 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.837866 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.837870 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.837873 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.837877 | orchestrator |
2025-08-29 15:01:31.837881 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-08-29 15:01:31.837885 | orchestrator | Friday 29 August 2025 14:59:14 +0000 (0:00:01.198) 0:09:25.148 *********
2025-08-29 15:01:31.837889 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.837895 | orchestrator |
2025-08-29 15:01:31.837898 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-08-29 15:01:31.837902 | orchestrator | Friday 29 August 2025 14:59:15 +0000 (0:00:01.299) 0:09:26.448 *********
2025-08-29 15:01:31.837906 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.837910 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.837913 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.837917 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.837921 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.837925 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.837928 | orchestrator |
2025-08-29 15:01:31.837932 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-08-29 15:01:31.837936 | orchestrator | Friday 29 August 2025 14:59:17 +0000 (0:00:01.620) 0:09:28.069 *********
2025-08-29 15:01:31.837939 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.837943 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.837947 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.837951 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.837954 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.837962 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.837965 | orchestrator |
2025-08-29 15:01:31.837969 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-08-29 15:01:31.837973 | orchestrator | Friday 29 August 2025 14:59:20 +0000 (0:00:03.431) 0:09:31.500 *********
2025-08-29 15:01:31.837977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.837981 | orchestrator |
2025-08-29 15:01:31.837984 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-08-29 15:01:31.837988 | orchestrator | Friday 29 August 2025 14:59:22 +0000 (0:00:01.358) 0:09:32.858 *********
2025-08-29 15:01:31.837992 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.837996 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.838000 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.838003 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838007 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838011 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838036 | orchestrator |
2025-08-29 15:01:31.838041 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-08-29 15:01:31.838045 | orchestrator | Friday 29 August 2025 14:59:23 +0000 (0:00:00.863) 0:09:33.722 *********
2025-08-29 15:01:31.838048 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:01:31.838052 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:01:31.838056 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:01:31.838060 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.838064 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.838068 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.838071 | orchestrator |
2025-08-29 15:01:31.838075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-08-29 15:01:31.838086 | orchestrator | Friday 29 August 2025 14:59:25 +0000 (0:00:02.301) 0:09:36.024 *********
2025-08-29 15:01:31.838090 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:01:31.838093 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:01:31.838097 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:01:31.838101 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838105 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838108 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838112 | orchestrator |
2025-08-29 15:01:31.838116 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-08-29 15:01:31.838120 | orchestrator |
2025-08-29 15:01:31.838124 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:01:31.838130 | orchestrator | Friday 29 August 2025 14:59:26 +0000 (0:00:01.118) 0:09:37.142 *********
2025-08-29 15:01:31.838134 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.838138 | orchestrator |
2025-08-29 15:01:31.838142 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:01:31.838146 | orchestrator | Friday 29 August 2025 14:59:27 +0000 (0:00:00.798) 0:09:37.941 *********
2025-08-29 15:01:31.838150 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5
2025-08-29 15:01:31.838154 | orchestrator |
2025-08-29 15:01:31.838157 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:01:31.838161 | orchestrator | Friday 29 August 2025 14:59:27 +0000 (0:00:00.606) 0:09:38.547 *********
2025-08-29 15:01:31.838165 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838169 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838172 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838176 | orchestrator |
2025-08-29 15:01:31.838180 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:01:31.838184 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:00.295) 0:09:38.842 *********
2025-08-29 15:01:31.838188 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838195 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838199 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838202 | orchestrator |
2025-08-29 15:01:31.838206 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:01:31.838210 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:01.015) 0:09:39.858 *********
2025-08-29 15:01:31.838214 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838217 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838221 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838225 | orchestrator |
2025-08-29 15:01:31.838228 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:01:31.838232 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.792) 0:09:40.650 *********
2025-08-29 15:01:31.838236 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838240 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838243 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838247 | orchestrator |
2025-08-29 15:01:31.838251 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:01:31.838255 | orchestrator | Friday 29 August 2025 14:59:30 +0000 (0:00:00.789) 0:09:41.440 *********
2025-08-29 15:01:31.838258 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838262 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838266 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838270 | orchestrator |
2025-08-29 15:01:31.838273 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:01:31.838277 | orchestrator | Friday 29 August 2025 14:59:31 +0000 (0:00:00.360) 0:09:41.800 *********
2025-08-29 15:01:31.838281 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838285 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838288 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838292 | orchestrator |
2025-08-29 15:01:31.838296 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:01:31.838300 | orchestrator | Friday 29 August 2025 14:59:31 +0000 (0:00:00.660) 0:09:42.460 *********
2025-08-29 15:01:31.838303 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838307 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838311 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838314 | orchestrator |
2025-08-29 15:01:31.838318 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:01:31.838322 | orchestrator | Friday 29 August 2025 14:59:32 +0000 (0:00:00.312) 0:09:42.772 *********
2025-08-29 15:01:31.838326 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838330 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838333 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838337 | orchestrator |
2025-08-29 15:01:31.838341 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:01:31.838345 | orchestrator | Friday 29 August 2025 14:59:32 +0000 (0:00:00.745) 0:09:43.518 *********
2025-08-29 15:01:31.838349 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838352 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838356 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838360 | orchestrator |
2025-08-29 15:01:31.838364 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:01:31.838367 | orchestrator | Friday 29 August 2025 14:59:33 +0000 (0:00:00.753) 0:09:44.271 *********
2025-08-29 15:01:31.838371 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838375 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838379 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838382 | orchestrator |
2025-08-29 15:01:31.838386 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:01:31.838390 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:00.580) 0:09:44.851 *********
2025-08-29 15:01:31.838394 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838399 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838405 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838415 | orchestrator |
2025-08-29 15:01:31.838421 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:01:31.838426 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:00.297) 0:09:45.148 *********
2025-08-29 15:01:31.838435 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838440 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838446 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838452 | orchestrator |
2025-08-29 15:01:31.838458 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:01:31.838464 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:00.321) 0:09:45.470 *********
2025-08-29 15:01:31.838470 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838476 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838482 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838489 | orchestrator |
2025-08-29 15:01:31.838493 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:01:31.838500 | orchestrator | Friday 29 August 2025 14:59:35 +0000 (0:00:00.428) 0:09:45.899 *********
2025-08-29 15:01:31.838504 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838508 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838512 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838516 | orchestrator |
2025-08-29 15:01:31.838519 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:01:31.838523 | orchestrator | Friday 29 August 2025 14:59:35 +0000 (0:00:00.733) 0:09:46.633 *********
2025-08-29 15:01:31.838527 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838531 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838535 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838538 | orchestrator |
2025-08-29 15:01:31.838542 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:01:31.838546 | orchestrator | Friday 29 August 2025 14:59:36 +0000 (0:00:00.472) 0:09:47.105 *********
2025-08-29 15:01:31.838549 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838553 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838557 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838560 | orchestrator |
2025-08-29 15:01:31.838564 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:01:31.838568 | orchestrator | Friday 29 August 2025 14:59:36 +0000 (0:00:00.485) 0:09:47.591 *********
2025-08-29 15:01:31.838572 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838575 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838579 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838583 | orchestrator |
2025-08-29 15:01:31.838587 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:01:31.838590 | orchestrator | Friday 29 August 2025 14:59:37 +0000 (0:00:00.425) 0:09:48.017 *********
2025-08-29 15:01:31.838594 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838598 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838602 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838605 | orchestrator |
2025-08-29 15:01:31.838609 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:01:31.838613 | orchestrator | Friday 29 August 2025 14:59:38 +0000 (0:00:00.703) 0:09:48.720 *********
2025-08-29 15:01:31.838616 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.838620 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.838624 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.838628 | orchestrator |
2025-08-29 15:01:31.838632 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-08-29 15:01:31.838635 | orchestrator | Friday 29 August 2025 14:59:38 +0000 (0:00:00.591) 0:09:49.312 *********
2025-08-29 15:01:31.838639 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838643 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838646 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-08-29 15:01:31.838650 | orchestrator |
2025-08-29 15:01:31.838658 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-08-29 15:01:31.838662 | orchestrator | Friday 29 August 2025 14:59:39 +0000 (0:00:00.769) 0:09:50.081 *********
2025-08-29 15:01:31.838665 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:01:31.838669 | orchestrator |
2025-08-29 15:01:31.838673 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-08-29 15:01:31.838677 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:02.225) 0:09:52.306 *********
2025-08-29 15:01:31.838682 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]}) 
2025-08-29 15:01:31.838688 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838692 | orchestrator |
2025-08-29 15:01:31.838695 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-08-29 15:01:31.838699 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:00.213) 0:09:52.520 *********
2025-08-29 15:01:31.838704 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:01:31.838712 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 15:01:31.838716 | orchestrator |
2025-08-29 15:01:31.838720 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-08-29 15:01:31.838724 | orchestrator | Friday 29 August 2025 14:59:50 +0000 (0:00:08.451) 0:10:00.972 *********
2025-08-29 15:01:31.838728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:01:31.838731 | orchestrator |
2025-08-29 15:01:31.838735 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-08-29 15:01:31.838739 | orchestrator | Friday 29 August 2025 14:59:54 +0000 (0:00:03.755) 0:10:04.727 *********
2025-08-29 15:01:31.838745 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.838750 | orchestrator |
2025-08-29 15:01:31.838773 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-08-29 15:01:31.838777 | orchestrator | Friday 29 August 2025 14:59:55 +0000 (0:00:00.987) 0:10:05.715 *********
2025-08-29 15:01:31.838781 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-08-29 15:01:31.838785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-08-29 15:01:31.838794 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-08-29 15:01:31.838798 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-08-29 15:01:31.838801 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-08-29 15:01:31.838805 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-08-29 15:01:31.838809 | orchestrator |
2025-08-29 15:01:31.838813 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-08-29 15:01:31.838816 | orchestrator | Friday 29 August 2025 14:59:56 +0000 (0:00:01.530) 0:10:07.245 *********
2025-08-29 15:01:31.838820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 15:01:31.838824 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-08-29 15:01:31.838828 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 15:01:31.838832 | orchestrator |
2025-08-29 15:01:31.838836 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-08-29 15:01:31.838843 | orchestrator | Friday 29 August 2025 14:59:58 +0000 (0:00:02.401) 0:10:09.646 *********
2025-08-29 15:01:31.838847 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 15:01:31.838854 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-08-29 15:01:31.838859 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.838865 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 15:01:31.838871 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2025-08-29 15:01:31.838876 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.838882 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 15:01:31.838888 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2025-08-29 15:01:31.838893 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.838899 | orchestrator |
2025-08-29 15:01:31.838905 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-08-29 15:01:31.838912 | orchestrator | Friday 29 August 2025 15:00:00 +0000 (0:00:01.201) 0:10:10.848 *********
2025-08-29 15:01:31.838918 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.838925 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.838931 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.838937 | orchestrator |
2025-08-29 15:01:31.838943 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-08-29 15:01:31.838947 | orchestrator | Friday 29 August 2025 15:00:03 +0000 (0:00:03.431) 0:10:14.280 *********
2025-08-29 15:01:31.838951 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.838954 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.838958 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.838962 | orchestrator |
2025-08-29 15:01:31.838965 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-08-29 15:01:31.838969 | orchestrator | Friday 29 August 2025 15:00:04 +0000 (0:00:00.778) 0:10:15.058 *********
2025-08-29 15:01:31.838973 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.838977 | orchestrator |
2025-08-29 15:01:31.838980 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-08-29 15:01:31.838984 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:00.752) 0:10:15.810 *********
2025-08-29 15:01:31.838988 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.838992 | orchestrator |
2025-08-29 15:01:31.838995 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-08-29 15:01:31.838999 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:00.648) 0:10:16.459 *********
2025-08-29 15:01:31.839003 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839006 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839010 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839014 | orchestrator |
2025-08-29 15:01:31.839018 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-08-29 15:01:31.839021 | orchestrator | Friday 29 August 2025 15:00:07 +0000 (0:00:01.733) 0:10:18.193 *********
2025-08-29 15:01:31.839025 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839029 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839032 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839036 | orchestrator |
2025-08-29 15:01:31.839040 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-08-29 15:01:31.839044 | orchestrator | Friday 29 August 2025 15:00:08 +0000 (0:00:01.224) 0:10:19.418 *********
2025-08-29 15:01:31.839047 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839051 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839055 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839058 | orchestrator |
2025-08-29 15:01:31.839062 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-08-29 15:01:31.839066 | orchestrator | Friday 29 August 2025 15:00:10 +0000 (0:00:01.811) 0:10:21.229 *********
2025-08-29 15:01:31.839070 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839077 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839081 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839084 | orchestrator |
2025-08-29 15:01:31.839088 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-08-29 15:01:31.839092 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:02.760) 0:10:23.989 *********
2025-08-29 15:01:31.839096 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839103 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839107 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839110 | orchestrator |
2025-08-29 15:01:31.839114 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 15:01:31.839118 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:01.623) 0:10:25.613 *********
2025-08-29 15:01:31.839122 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839125 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839129 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839133 | orchestrator |
2025-08-29 15:01:31.839137 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 15:01:31.839143 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.660) 0:10:26.273 *********
2025-08-29 15:01:31.839147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.839151 | orchestrator |
2025-08-29 15:01:31.839155 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 15:01:31.839158 | orchestrator | Friday 29 August 2025 15:00:16 +0000 (0:00:00.821) 0:10:27.095 *********
2025-08-29 15:01:31.839162 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839166 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839170 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839173 | orchestrator |
2025-08-29 15:01:31.839177 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 15:01:31.839181 | orchestrator | Friday 29 August 2025 15:00:16 +0000 (0:00:00.352) 0:10:27.448 *********
2025-08-29 15:01:31.839185 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:01:31.839188 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:01:31.839192 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:01:31.839196 | orchestrator |
2025-08-29 15:01:31.839200 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 15:01:31.839203 | orchestrator | Friday 29 August 2025 15:00:17 +0000 (0:00:01.158) 0:10:28.607 *********
2025-08-29 15:01:31.839207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-08-29 15:01:31.839211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-08-29 15:01:31.839214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-08-29 15:01:31.839218 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839222 | orchestrator |
2025-08-29 15:01:31.839226 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 15:01:31.839229 | orchestrator | Friday 29 August 2025 15:00:19 +0000 (0:00:01.252) 0:10:29.859 *********
2025-08-29 15:01:31.839233 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839237 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839241 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839245 | orchestrator |
2025-08-29 15:01:31.839248 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-08-29 15:01:31.839252 | orchestrator |
2025-08-29 15:01:31.839256 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:01:31.839260 | orchestrator | Friday 29 August 2025 15:00:19 +0000 (0:00:00.598) 0:10:30.457 *********
2025-08-29 15:01:31.839263 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.839267 | orchestrator |
2025-08-29 15:01:31.839271 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:01:31.839275 | orchestrator | Friday 29 August 2025 15:00:20 +0000 (0:00:00.817) 0:10:31.275 *********
2025-08-29 15:01:31.839282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:01:31.839285 | orchestrator |
2025-08-29 15:01:31.839289 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:01:31.839293 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:00.546) 0:10:31.822 *********
2025-08-29 15:01:31.839297 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839300 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839304 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839308 | orchestrator |
2025-08-29 15:01:31.839312 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:01:31.839315 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:00.373) 0:10:32.195 *********
2025-08-29 15:01:31.839319 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839323 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839327 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839330 | orchestrator |
2025-08-29 15:01:31.839334 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:01:31.839338 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.974) 0:10:33.170 *********
2025-08-29 15:01:31.839342 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839345 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839349 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839353 | orchestrator |
2025-08-29 15:01:31.839357 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:01:31.839360 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.799) 0:10:33.970 *********
2025-08-29 15:01:31.839364 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839368 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839371 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839375 | orchestrator |
2025-08-29 15:01:31.839379 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:01:31.839383 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.732) 0:10:34.702 *********
2025-08-29 15:01:31.839386 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839390 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839394 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839398 | orchestrator |
2025-08-29 15:01:31.839401 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:01:31.839405 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.300) 0:10:35.003 *********
2025-08-29 15:01:31.839409 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839413 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839416 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839420 | orchestrator |
2025-08-29 15:01:31.839426 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:01:31.839430 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.564) 0:10:35.568 *********
2025-08-29 15:01:31.839434 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839438 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839441 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839445 | orchestrator |
2025-08-29 15:01:31.839449 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:01:31.839453 | orchestrator | Friday 29 August 2025 15:00:25 +0000 (0:00:00.359) 0:10:35.928 *********
2025-08-29 15:01:31.839456 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839463 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839467 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839470 | orchestrator |
2025-08-29 15:01:31.839474 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:01:31.839478 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:00.837) 0:10:36.765 *********
2025-08-29 15:01:31.839482 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839489 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839493 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839497 | orchestrator |
2025-08-29 15:01:31.839500 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:01:31.839504 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:00.745) 0:10:37.511 *********
2025-08-29 15:01:31.839508 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839512 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839515 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839519 | orchestrator |
2025-08-29 15:01:31.839523 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:01:31.839527 | orchestrator | Friday 29 August 2025 15:00:27 +0000 (0:00:00.553) 0:10:38.064 *********
2025-08-29 15:01:31.839530 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:01:31.839534 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:01:31.839538 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:01:31.839542 | orchestrator |
2025-08-29 15:01:31.839545 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:01:31.839549 | orchestrator | Friday 29 August 2025 15:00:27 +0000 (0:00:00.305) 0:10:38.370 *********
2025-08-29 15:01:31.839553 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839557 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839560 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839564 | orchestrator |
2025-08-29 15:01:31.839568 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:01:31.839572 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.343) 0:10:38.713 *********
2025-08-29 15:01:31.839576 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839579 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839583 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839587 | orchestrator |
2025-08-29 15:01:31.839591 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:01:31.839594 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.337) 0:10:39.050 *********
2025-08-29 15:01:31.839598 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:01:31.839602 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:01:31.839605 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:01:31.839610 | orchestrator |
2025-08-29 15:01:31.839616 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status]
****************************** 2025-08-29 15:01:31.839623 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.573) 0:10:39.624 ********* 2025-08-29 15:01:31.839628 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.839634 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.839639 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.839645 | orchestrator | 2025-08-29 15:01:31.839650 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:01:31.839656 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.317) 0:10:39.942 ********* 2025-08-29 15:01:31.839662 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.839668 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.839674 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.839680 | orchestrator | 2025-08-29 15:01:31.839686 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:01:31.839692 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.296) 0:10:40.238 ********* 2025-08-29 15:01:31.839698 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.839704 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.839710 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.839717 | orchestrator | 2025-08-29 15:01:31.839721 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:01:31.839725 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.329) 0:10:40.567 ********* 2025-08-29 15:01:31.839729 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.839732 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.839736 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.839745 | orchestrator | 2025-08-29 15:01:31.839749 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-08-29 15:01:31.839774 | orchestrator | Friday 29 August 2025 15:00:30 +0000 (0:00:00.582) 0:10:41.150 ********* 2025-08-29 15:01:31.839779 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.839783 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.839787 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.839790 | orchestrator | 2025-08-29 15:01:31.839794 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 15:01:31.839798 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.589) 0:10:41.740 ********* 2025-08-29 15:01:31.839802 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.839806 | orchestrator | 2025-08-29 15:01:31.839809 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:01:31.839813 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.765) 0:10:42.505 ********* 2025-08-29 15:01:31.839817 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.839821 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:01:31.839827 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:01:31.839833 | orchestrator | 2025-08-29 15:01:31.839844 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:01:31.839849 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:02.160) 0:10:44.666 ********* 2025-08-29 15:01:31.839855 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:01:31.839861 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:01:31.839867 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.839873 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:01:31.839879 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:01:31.839885 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.839896 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:01:31.839902 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:01:31.839909 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.839915 | orchestrator | 2025-08-29 15:01:31.839921 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 15:01:31.839927 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:01.233) 0:10:45.899 ********* 2025-08-29 15:01:31.839931 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.839935 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.839939 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.839943 | orchestrator | 2025-08-29 15:01:31.839946 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 15:01:31.839950 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.436) 0:10:46.335 ********* 2025-08-29 15:01:31.839954 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.839958 | orchestrator | 2025-08-29 15:01:31.839962 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 15:01:31.839965 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:00.892) 0:10:47.227 ********* 2025-08-29 15:01:31.839969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.839973 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.839977 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.839980 | orchestrator | 2025-08-29 15:01:31.839984 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 15:01:31.839992 | orchestrator | Friday 29 August 2025 15:00:37 +0000 (0:00:00.869) 0:10:48.097 ********* 2025-08-29 15:01:31.839996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.840000 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:01:31.840004 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.840007 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:01:31.840011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.840015 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:01:31.840019 | orchestrator | 2025-08-29 15:01:31.840022 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:01:31.840026 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:04.272) 0:10:52.369 ********* 2025-08-29 15:01:31.840030 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.840034 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:01:31.840037 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-08-29 15:01:31.840041 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:01:31.840045 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:01:31.840048 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:01:31.840052 | orchestrator | 2025-08-29 15:01:31.840056 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:01:31.840060 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:02.385) 0:10:54.755 ********* 2025-08-29 15:01:31.840063 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:01:31.840067 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.840071 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:01:31.840074 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.840078 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:01:31.840082 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.840085 | orchestrator | 2025-08-29 15:01:31.840089 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 15:01:31.840093 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:01.896) 0:10:56.651 ********* 2025-08-29 15:01:31.840097 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 15:01:31.840100 | orchestrator | 2025-08-29 15:01:31.840104 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 15:01:31.840111 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.276) 0:10:56.927 ********* 2025-08-29 15:01:31.840115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840119 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840137 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840144 | orchestrator | 2025-08-29 15:01:31.840148 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 15:01:31.840152 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.623) 0:10:57.551 ********* 2025-08-29 15:01:31.840156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:01:31.840175 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840179 | orchestrator | 2025-08-29 15:01:31.840182 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 15:01:31.840186 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:00.604) 0:10:58.155 ********* 2025-08-29 15:01:31.840190 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:01:31.840194 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:01:31.840198 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:01:31.840201 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:01:31.840205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:01:31.840209 | orchestrator | 2025-08-29 15:01:31.840213 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 15:01:31.840217 | orchestrator | Friday 29 August 2025 15:01:18 +0000 (0:00:31.338) 0:11:29.494 ********* 2025-08-29 15:01:31.840220 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840224 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.840228 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.840232 | orchestrator | 2025-08-29 15:01:31.840236 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 15:01:31.840240 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.328) 0:11:29.822 
********* 2025-08-29 15:01:31.840243 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840247 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.840251 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.840255 | orchestrator | 2025-08-29 15:01:31.840259 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 15:01:31.840262 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:00.318) 0:11:30.141 ********* 2025-08-29 15:01:31.840266 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.840270 | orchestrator | 2025-08-29 15:01:31.840274 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 15:01:31.840277 | orchestrator | Friday 29 August 2025 15:01:20 +0000 (0:00:00.824) 0:11:30.966 ********* 2025-08-29 15:01:31.840281 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.840290 | orchestrator | 2025-08-29 15:01:31.840294 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 15:01:31.840297 | orchestrator | Friday 29 August 2025 15:01:20 +0000 (0:00:00.600) 0:11:31.566 ********* 2025-08-29 15:01:31.840301 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.840305 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.840309 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.840312 | orchestrator | 2025-08-29 15:01:31.840319 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 15:01:31.840322 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:01.568) 0:11:33.135 ********* 2025-08-29 15:01:31.840326 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.840330 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 15:01:31.840334 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.840338 | orchestrator | 2025-08-29 15:01:31.840342 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 15:01:31.840346 | orchestrator | Friday 29 August 2025 15:01:23 +0000 (0:00:01.244) 0:11:34.379 ********* 2025-08-29 15:01:31.840349 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:01:31.840353 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:01:31.840357 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:01:31.840361 | orchestrator | 2025-08-29 15:01:31.840365 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 15:01:31.840369 | orchestrator | Friday 29 August 2025 15:01:25 +0000 (0:00:01.787) 0:11:36.166 ********* 2025-08-29 15:01:31.840423 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.840440 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.840444 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:01:31.840448 | orchestrator | 2025-08-29 15:01:31.840452 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:01:31.840456 | orchestrator | Friday 29 August 2025 15:01:28 +0000 (0:00:02.729) 0:11:38.896 ********* 2025-08-29 15:01:31.840460 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.840467 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.840471 | orchestrator | 2025-08-29 15:01:31.840475 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-08-29 15:01:31.840479 | orchestrator | Friday 29 August 2025 15:01:28 +0000 (0:00:00.334) 0:11:39.231 ********* 2025-08-29 15:01:31.840482 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:01:31.840486 | orchestrator | 2025-08-29 15:01:31.840490 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:01:31.840494 | orchestrator | Friday 29 August 2025 15:01:29 +0000 (0:00:00.774) 0:11:40.005 ********* 2025-08-29 15:01:31.840497 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.840501 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.840507 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.840513 | orchestrator | 2025-08-29 15:01:31.840518 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:01:31.840523 | orchestrator | Friday 29 August 2025 15:01:29 +0000 (0:00:00.338) 0:11:40.344 ********* 2025-08-29 15:01:31.840529 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840535 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:01:31.840541 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:01:31.840547 | orchestrator | 2025-08-29 15:01:31.840553 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:01:31.840560 | orchestrator | Friday 29 August 2025 15:01:29 +0000 (0:00:00.324) 0:11:40.668 ********* 2025-08-29 15:01:31.840565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:01:31.840577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:01:31.840583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:01:31.840589 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:01:31.840596 | 
orchestrator | 2025-08-29 15:01:31.840600 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:01:31.840604 | orchestrator | Friday 29 August 2025 15:01:30 +0000 (0:00:00.894) 0:11:41.563 ********* 2025-08-29 15:01:31.840608 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:01:31.840612 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:01:31.840616 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:01:31.840619 | orchestrator | 2025-08-29 15:01:31.840623 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:01:31.840627 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-08-29 15:01:31.840631 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-08-29 15:01:31.840635 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-08-29 15:01:31.840639 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-08-29 15:01:31.840643 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-08-29 15:01:31.840647 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-08-29 15:01:31.840650 | orchestrator | 2025-08-29 15:01:31.840654 | orchestrator | 2025-08-29 15:01:31.840658 | orchestrator | 2025-08-29 15:01:31.840662 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:01:31.840665 | orchestrator | Friday 29 August 2025 15:01:31 +0000 (0:00:00.307) 0:11:41.871 ********* 2025-08-29 15:01:31.840674 | orchestrator | =============================================================================== 2025-08-29 15:01:31.840677 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 71.76s 2025-08-29 15:01:31.840681 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.05s 2025-08-29 15:01:31.840685 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.34s 2025-08-29 15:01:31.840689 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.58s 2025-08-29 15:01:31.840692 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.06s 2025-08-29 15:01:31.840699 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.97s 2025-08-29 15:01:31.840703 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.36s 2025-08-29 15:01:31.840707 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.89s 2025-08-29 15:01:31.840711 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.93s 2025-08-29 15:01:31.840714 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.45s 2025-08-29 15:01:31.840718 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.71s 2025-08-29 15:01:31.840722 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.38s 2025-08-29 15:01:31.840725 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.94s 2025-08-29 15:01:31.840729 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.62s 2025-08-29 15:01:31.840733 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.61s 2025-08-29 15:01:31.840740 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.50s 2025-08-29 15:01:31.840743 | orchestrator | ceph-rgw : 
Create rgw keyrings ------------------------------------------ 4.27s 2025-08-29 15:01:31.840747 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.85s 2025-08-29 15:01:31.840751 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.76s 2025-08-29 15:01:31.840798 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.46s 2025-08-29 15:01:31.840802 | orchestrator | 2025-08-29 15:01:31 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:31.840806 | orchestrator | 2025-08-29 15:01:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:34.876120 | orchestrator | 2025-08-29 15:01:34 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:34.878056 | orchestrator | 2025-08-29 15:01:34 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:01:34.880253 | orchestrator | 2025-08-29 15:01:34 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:34.880284 | orchestrator | 2025-08-29 15:01:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:37.928620 | orchestrator | 2025-08-29 15:01:37 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:37.928921 | orchestrator | 2025-08-29 15:01:37 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:01:37.930991 | orchestrator | 2025-08-29 15:01:37 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED 2025-08-29 15:01:37.931049 | orchestrator | 2025-08-29 15:01:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:01:40.990673 | orchestrator | 2025-08-29 15:01:40 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:01:40.992667 | orchestrator | 2025-08-29 15:01:40 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state 
STARTED
2025-08-29 15:01:40.994186 | orchestrator | 2025-08-29 15:01:40 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:40.994465 | orchestrator | 2025-08-29 15:01:40 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:44.053889 | orchestrator | 2025-08-29 15:01:44 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:44.056997 | orchestrator | 2025-08-29 15:01:44 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:44.060087 | orchestrator | 2025-08-29 15:01:44 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:44.060690 | orchestrator | 2025-08-29 15:01:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:47.108479 | orchestrator | 2025-08-29 15:01:47 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:47.110464 | orchestrator | 2025-08-29 15:01:47 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:47.110507 | orchestrator | 2025-08-29 15:01:47 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:47.110519 | orchestrator | 2025-08-29 15:01:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:50.156198 | orchestrator | 2025-08-29 15:01:50 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:50.156282 | orchestrator | 2025-08-29 15:01:50 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:50.158275 | orchestrator | 2025-08-29 15:01:50 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:50.158358 | orchestrator | 2025-08-29 15:01:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:53.202569 | orchestrator | 2025-08-29 15:01:53 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:53.205656 | orchestrator | 2025-08-29 15:01:53 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:53.209658 | orchestrator | 2025-08-29 15:01:53 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:53.209764 | orchestrator | 2025-08-29 15:01:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:56.251936 | orchestrator | 2025-08-29 15:01:56 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:56.254810 | orchestrator | 2025-08-29 15:01:56 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:56.261189 | orchestrator | 2025-08-29 15:01:56 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:56.263231 | orchestrator | 2025-08-29 15:01:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:01:59.304975 | orchestrator | 2025-08-29 15:01:59 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:01:59.306420 | orchestrator | 2025-08-29 15:01:59 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:01:59.308378 | orchestrator | 2025-08-29 15:01:59 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:01:59.308527 | orchestrator | 2025-08-29 15:01:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:02.355136 | orchestrator | 2025-08-29 15:02:02 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:02.356673 | orchestrator | 2025-08-29 15:02:02 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:02.358351 | orchestrator | 2025-08-29 15:02:02 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:02.358442 | orchestrator | 2025-08-29 15:02:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:05.413321 | orchestrator | 2025-08-29 15:02:05 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:05.416500 | orchestrator | 2025-08-29 15:02:05 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:05.418251 | orchestrator | 2025-08-29 15:02:05 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:05.419568 | orchestrator | 2025-08-29 15:02:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:08.468619 | orchestrator | 2025-08-29 15:02:08 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:08.469928 | orchestrator | 2025-08-29 15:02:08 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:08.471651 | orchestrator | 2025-08-29 15:02:08 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:08.471737 | orchestrator | 2025-08-29 15:02:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:11.513376 | orchestrator | 2025-08-29 15:02:11 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:11.514910 | orchestrator | 2025-08-29 15:02:11 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:11.516900 | orchestrator | 2025-08-29 15:02:11 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:11.516948 | orchestrator | 2025-08-29 15:02:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:14.565038 | orchestrator | 2025-08-29 15:02:14 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:14.572430 | orchestrator | 2025-08-29 15:02:14 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:14.576582 | orchestrator | 2025-08-29 15:02:14 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:14.576673 | orchestrator | 2025-08-29 15:02:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:17.630232 | orchestrator | 2025-08-29 15:02:17 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:17.630334 | orchestrator | 2025-08-29 15:02:17 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:17.632646 | orchestrator | 2025-08-29 15:02:17 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:17.632750 | orchestrator | 2025-08-29 15:02:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:20.682819 | orchestrator | 2025-08-29 15:02:20 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:20.684516 | orchestrator | 2025-08-29 15:02:20 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:20.686337 | orchestrator | 2025-08-29 15:02:20 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:20.686402 | orchestrator | 2025-08-29 15:02:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:23.721319 | orchestrator | 2025-08-29 15:02:23 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:23.721879 | orchestrator | 2025-08-29 15:02:23 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:23.724049 | orchestrator | 2025-08-29 15:02:23 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state STARTED
2025-08-29 15:02:23.724093 | orchestrator | 2025-08-29 15:02:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:26.774092 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED
2025-08-29 15:02:26.775262 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:26.777911 | orchestrator | 2025-08-29 15:02:26 | INFO  | Task 41a275fe-0e3d-4113-b6b1-57f88c74e974 is in state SUCCESS
2025-08-29 15:02:26.777957 | orchestrator | 2025-08-29 15:02:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:26.779122 | orchestrator |
2025-08-29 15:02:26.779149 | orchestrator |
2025-08-29 15:02:26.779154 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:02:26.779159 | orchestrator |
2025-08-29 15:02:26.779164 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:02:26.779169 | orchestrator | Friday 29 August 2025 14:59:24 +0000 (0:00:00.273) 0:00:00.273 *********
2025-08-29 15:02:26.779174 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:02:26.779179 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:02:26.779184 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:02:26.779188 | orchestrator |
2025-08-29 15:02:26.779193 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:02:26.779197 | orchestrator | Friday 29 August 2025 14:59:25 +0000 (0:00:00.335) 0:00:00.609 *********
2025-08-29 15:02:26.779204 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 15:02:26.779209 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-08-29 15:02:26.779213 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-08-29 15:02:26.779218 | orchestrator |
2025-08-29 15:02:26.779241 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-08-29 15:02:26.779245 | orchestrator |
2025-08-29 15:02:26.779249 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 15:02:26.779254 | orchestrator | Friday 29 August 2025 14:59:25 +0000 (0:00:00.418) 0:00:01.028 *********
2025-08-29 15:02:26.779259 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29
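The wait loop in the log above (check each task's state, sleep, repeat until every task leaves STARTED) can be sketched roughly as follows. This is an illustrative reconstruction, not OSISM code; `get_state` stands in for whatever client call reports a task's state.

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=600):
    """Poll each task until it leaves STARTED; return the final states.

    get_state: hypothetical callable mapping a task id to its current
    state string ("STARTED", "SUCCESS", ...).
    """
    final = {}
    for _ in range(max_checks):
        for task_id in task_ids:
            if task_id in final:
                continue  # this task already reached a terminal state
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
        if len(final) == len(task_ids):
            break  # all tasks done, stop polling
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return final
```

In the run above three task UUIDs are polled at roughly 3-second intervals until the last one reports SUCCESS, at which point the buffered playbook output is flushed.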
15:02:26.779263 | orchestrator | 2025-08-29 15:02:26.779267 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-08-29 15:02:26.779272 | orchestrator | Friday 29 August 2025 14:59:26 +0000 (0:00:00.524) 0:00:01.552 ********* 2025-08-29 15:02:26.779276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:02:26.779280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:02:26.779285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:02:26.779289 | orchestrator | 2025-08-29 15:02:26.779293 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-08-29 15:02:26.779298 | orchestrator | Friday 29 August 2025 14:59:26 +0000 (0:00:00.801) 0:00:02.354 ********* 2025-08-29 15:02:26.779304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779457 | orchestrator | 2025-08-29 15:02:26.779461 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:02:26.779466 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:01.689) 0:00:04.043 ********* 2025-08-29 15:02:26.779470 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:02:26.779474 | orchestrator | 2025-08-29 15:02:26.779479 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-08-29 15:02:26.779483 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.587) 0:00:04.631 ********* 2025-08-29 15:02:26.779494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779541 | orchestrator | 2025-08-29 15:02:26.779558 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 15:02:26.779562 | orchestrator | Friday 29 August 2025 14:59:31 +0000 (0:00:02.734) 0:00:07.365 ********* 
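The config and cert-copy tasks above all iterate the same mapping of service name to definition (`with_dict`-style) and act per service. A minimal sketch of that iteration pattern, with hypothetical data rather than the real kolla-ansible variables:

```python
def iter_enabled_services(services):
    """Yield (name, image) for each enabled service in a services dict,
    mirroring how the tasks above loop over the opensearch service map.
    Illustrative only; key names follow the dicts printed in the log.
    """
    for name, definition in services.items():
        if definition.get("enabled"):
            yield name, definition["image"]
```

Each `changed:`/`skipping:` line in the log corresponds to one such item; skipped items here are those gated by an additional condition (for example, backend TLS being disabled).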
2025-08-29 15:02:26.779573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-08-29 15:02:26.779587 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:26.779596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:02:26.779611 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:26.779616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:02:26.779628 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:26.779633 | orchestrator | 2025-08-29 15:02:26.779637 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 15:02:26.779641 | orchestrator | Friday 29 August 2025 14:59:33 +0000 (0:00:01.500) 0:00:08.865 ********* 2025-08-29 15:02:26.779655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:02:26.779665 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:26.779669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:02:26.779689 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:26.779726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:02:26.779732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:02:26.779736 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:26.779741 | orchestrator | 2025-08-29 15:02:26.779745 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 15:02:26.779750 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:01.017) 0:00:09.883 ********* 2025-08-29 15:02:26.779754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779807 | orchestrator | 2025-08-29 15:02:26.779811 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 15:02:26.779815 | orchestrator | Friday 29 August 2025 14:59:37 +0000 (0:00:02.829) 0:00:12.712 ********* 2025-08-29 15:02:26.779820 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:26.779824 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.779829 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:26.779833 | orchestrator | 2025-08-29 15:02:26.779837 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 15:02:26.779842 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:03.819) 0:00:16.531 ********* 2025-08-29 15:02:26.779846 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.779850 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:26.779855 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:26.779859 | 
orchestrator | 2025-08-29 15:02:26.779863 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 15:02:26.779868 | orchestrator | Friday 29 August 2025 14:59:42 +0000 (0:00:01.647) 0:00:18.179 ********* 2025-08-29 15:02:26.779878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-08-29 15:02:26.779887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:02:26.779896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 15:02:26.779909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:02:26.779919 | orchestrator | 2025-08-29 15:02:26.779923 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:02:26.779928 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:02.031) 0:00:20.210 ********* 2025-08-29 15:02:26.779932 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:26.779937 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:26.779941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:26.779947 | orchestrator | 2025-08-29 15:02:26.779954 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:02:26.779961 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.315) 0:00:20.526 ********* 2025-08-29 15:02:26.779968 | orchestrator | 2025-08-29 15:02:26.779974 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:02:26.779981 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.065) 0:00:20.592 ********* 2025-08-29 15:02:26.779988 | orchestrator | 2025-08-29 15:02:26.779995 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 15:02:26.780008 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.081) 0:00:20.674 ********* 2025-08-29 15:02:26.780015 | orchestrator | 2025-08-29 15:02:26.780022 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 15:02:26.780029 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.297) 0:00:20.971 ********* 2025-08-29 15:02:26.780036 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:26.780042 | orchestrator | 
2025-08-29 15:02:26.780049 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 15:02:26.780057 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.215) 0:00:21.187 ********* 2025-08-29 15:02:26.780063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:26.780069 | orchestrator | 2025-08-29 15:02:26.780076 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-08-29 15:02:26.780083 | orchestrator | Friday 29 August 2025 14:59:45 +0000 (0:00:00.197) 0:00:21.384 ********* 2025-08-29 15:02:26.780089 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.780097 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:26.780109 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:26.780118 | orchestrator | 2025-08-29 15:02:26.780123 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 15:02:26.780127 | orchestrator | Friday 29 August 2025 15:00:54 +0000 (0:01:08.241) 0:01:29.626 ********* 2025-08-29 15:02:26.780131 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.780136 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:26.780140 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:26.780144 | orchestrator | 2025-08-29 15:02:26.780148 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:02:26.780155 | orchestrator | Friday 29 August 2025 15:02:14 +0000 (0:01:20.306) 0:02:49.933 ********* 2025-08-29 15:02:26.780162 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:02:26.780170 | orchestrator | 2025-08-29 15:02:26.780177 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 15:02:26.780184 | orchestrator | Friday 29 August 2025 15:02:15 +0000 
(0:00:00.698) 0:02:50.631 ********* 2025-08-29 15:02:26.780192 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:26.780199 | orchestrator | 2025-08-29 15:02:26.780203 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 15:02:26.780207 | orchestrator | Friday 29 August 2025 15:02:17 +0000 (0:00:02.348) 0:02:52.979 ********* 2025-08-29 15:02:26.780212 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:26.780216 | orchestrator | 2025-08-29 15:02:26.780220 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 15:02:26.780225 | orchestrator | Friday 29 August 2025 15:02:19 +0000 (0:00:02.237) 0:02:55.216 ********* 2025-08-29 15:02:26.780229 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.780234 | orchestrator | 2025-08-29 15:02:26.780241 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 15:02:26.780248 | orchestrator | Friday 29 August 2025 15:02:22 +0000 (0:00:02.716) 0:02:57.932 ********* 2025-08-29 15:02:26.780256 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:26.780261 | orchestrator | 2025-08-29 15:02:26.780268 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:02:26.780274 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:02:26.780280 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 15:02:26.780284 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 15:02:26.780295 | orchestrator | 2025-08-29 15:02:26.780299 | orchestrator | 2025-08-29 15:02:26.780303 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:02:26.780307 | orchestrator | Friday 29 
August 2025 15:02:24 +0000 (0:00:02.378) 0:03:00.311 ********* 2025-08-29 15:02:26.780312 | orchestrator | =============================================================================== 2025-08-29 15:02:26.780316 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.31s 2025-08-29 15:02:26.780320 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.24s 2025-08-29 15:02:26.780325 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.82s 2025-08-29 15:02:26.780329 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.83s 2025-08-29 15:02:26.780333 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.73s 2025-08-29 15:02:26.780337 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.72s 2025-08-29 15:02:26.780342 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.38s 2025-08-29 15:02:26.780346 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.35s 2025-08-29 15:02:26.780350 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.24s 2025-08-29 15:02:26.780355 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.03s 2025-08-29 15:02:26.780359 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2025-08-29 15:02:26.780363 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.65s 2025-08-29 15:02:26.780368 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.50s 2025-08-29 15:02:26.780372 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.02s 2025-08-29 15:02:26.780376 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.80s 2025-08-29 15:02:26.780380 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-08-29 15:02:26.780385 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s 2025-08-29 15:02:26.780389 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-08-29 15:02:26.780393 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.44s 2025-08-29 15:02:26.780398 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-08-29 15:02:29.827947 | orchestrator | 2025-08-29 15:02:29 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:29.831732 | orchestrator | 2025-08-29 15:02:29 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:29.831833 | orchestrator | 2025-08-29 15:02:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:32.880906 | orchestrator | 2025-08-29 15:02:32 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:32.883505 | orchestrator | 2025-08-29 15:02:32 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:32.883573 | orchestrator | 2025-08-29 15:02:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:35.926447 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:35.926548 | orchestrator | 2025-08-29 15:02:35 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:35.926565 | orchestrator | 2025-08-29 15:02:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:38.984732 | orchestrator | 2025-08-29 15:02:38 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:38.985794 | 
orchestrator | 2025-08-29 15:02:38 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:38.986158 | orchestrator | 2025-08-29 15:02:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:42.036737 | orchestrator | 2025-08-29 15:02:42 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:42.038454 | orchestrator | 2025-08-29 15:02:42 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:42.038499 | orchestrator | 2025-08-29 15:02:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:45.086505 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:45.087080 | orchestrator | 2025-08-29 15:02:45 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:45.087430 | orchestrator | 2025-08-29 15:02:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:48.134347 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state STARTED 2025-08-29 15:02:48.136069 | orchestrator | 2025-08-29 15:02:48 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:48.136123 | orchestrator | 2025-08-29 15:02:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:02:51.183153 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:02:51.183431 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task e98c4218-e1ac-4294-84a6-eca2ca1641b8 is in state SUCCESS 2025-08-29 15:02:51.185418 | orchestrator | 2025-08-29 15:02:51.185493 | orchestrator | 2025-08-29 15:02:51.185517 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-08-29 15:02:51.185966 | orchestrator | 2025-08-29 15:02:51.186001 | orchestrator | TASK [Inform the user about the following task] 
******************************** 2025-08-29 15:02:51.186102 | orchestrator | Friday 29 August 2025 14:59:24 +0000 (0:00:00.115) 0:00:00.115 ********* 2025-08-29 15:02:51.186130 | orchestrator | ok: [localhost] => { 2025-08-29 15:02:51.186151 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-08-29 15:02:51.186170 | orchestrator | } 2025-08-29 15:02:51.186191 | orchestrator | 2025-08-29 15:02:51.186212 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 15:02:51.186231 | orchestrator | Friday 29 August 2025 14:59:24 +0000 (0:00:00.059) 0:00:00.175 ********* 2025-08-29 15:02:51.186251 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 15:02:51.186271 | orchestrator | ...ignoring 2025-08-29 15:02:51.186290 | orchestrator | 2025-08-29 15:02:51.186308 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 15:02:51.186327 | orchestrator | Friday 29 August 2025 14:59:27 +0000 (0:00:02.869) 0:00:03.044 ********* 2025-08-29 15:02:51.186347 | orchestrator | skipping: [localhost] 2025-08-29 15:02:51.186368 | orchestrator | 2025-08-29 15:02:51.186389 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 15:02:51.186410 | orchestrator | Friday 29 August 2025 14:59:27 +0000 (0:00:00.056) 0:00:03.100 ********* 2025-08-29 15:02:51.186432 | orchestrator | ok: [localhost] 2025-08-29 15:02:51.186453 | orchestrator | 2025-08-29 15:02:51.186474 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:02:51.186494 | orchestrator | 2025-08-29 15:02:51.186515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 
15:02:51.186536 | orchestrator | Friday 29 August 2025 14:59:27 +0000 (0:00:00.168) 0:00:03.269 ********* 2025-08-29 15:02:51.186558 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.186585 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.186610 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.186701 | orchestrator | 2025-08-29 15:02:51.186726 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:02:51.186749 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:00.329) 0:00:03.598 ********* 2025-08-29 15:02:51.186790 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 15:02:51.186827 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 15:02:51.186884 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 15:02:51.186903 | orchestrator | 2025-08-29 15:02:51.186921 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 15:02:51.186939 | orchestrator | 2025-08-29 15:02:51.186958 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 15:02:51.186976 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:00.739) 0:00:04.338 ********* 2025-08-29 15:02:51.186995 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:02:51.187015 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:02:51.187033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:02:51.187052 | orchestrator | 2025-08-29 15:02:51.187071 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:02:51.187088 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.398) 0:00:04.736 ********* 2025-08-29 15:02:51.187108 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 15:02:51.187130 | orchestrator | 2025-08-29 15:02:51.187148 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 15:02:51.187167 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.628) 0:00:05.365 ********* 2025-08-29 15:02:51.187218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187312 | orchestrator | 2025-08-29 15:02:51.187345 | 
orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 15:02:51.187365 | orchestrator | Friday 29 August 2025 14:59:33 +0000 (0:00:03.406) 0:00:08.772 ********* 2025-08-29 15:02:51.187384 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.187404 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.187421 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.187441 | orchestrator | 2025-08-29 15:02:51.187462 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 15:02:51.187482 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:00.926) 0:00:09.699 ********* 2025-08-29 15:02:51.187500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.187519 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.187549 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.187569 | orchestrator | 2025-08-29 15:02:51.187588 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 15:02:51.187606 | orchestrator | Friday 29 August 2025 14:59:36 +0000 (0:00:01.804) 0:00:11.503 ********* 2025-08-29 15:02:51.187634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.187758 | orchestrator | 2025-08-29 15:02:51.187778 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-08-29 15:02:51.187798 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:05.137) 0:00:16.640 ********* 2025-08-29 15:02:51.187816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.187835 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.187853 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.187872 | orchestrator | 2025-08-29 15:02:51.187890 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-08-29 15:02:51.187911 | orchestrator | Friday 29 August 2025 14:59:42 +0000 (0:00:01.070) 0:00:17.710 ********* 2025-08-29 15:02:51.187928 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.187947 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:51.187965 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:51.187983 | orchestrator | 2025-08-29 15:02:51.188002 | orchestrator | TASK [mariadb : include_tasks] 
************************************************* 2025-08-29 15:02:51.188020 | orchestrator | Friday 29 August 2025 14:59:46 +0000 (0:00:03.987) 0:00:21.698 ********* 2025-08-29 15:02:51.188039 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:02:51.188059 | orchestrator | 2025-08-29 15:02:51.188077 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 15:02:51.188095 | orchestrator | Friday 29 August 2025 14:59:46 +0000 (0:00:00.613) 0:00:22.311 ********* 2025-08-29 15:02:51.188129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188163 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.188191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188211 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.188243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188276 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.188296 | orchestrator | 2025-08-29 15:02:51.188314 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 15:02:51.188333 | orchestrator | Friday 29 August 2025 14:59:49 +0000 (0:00:02.922) 0:00:25.234 ********* 2025-08-29 15:02:51.188359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.188414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.188470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188492 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.188512 | orchestrator | 2025-08-29 15:02:51.188531 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 15:02:51.188549 | orchestrator | Friday 29 August 2025 14:59:52 +0000 (0:00:03.171) 0:00:28.405 ********* 2025-08-29 15:02:51.188579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188619 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.188650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188699 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.188722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:02:51.188755 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.188774 | orchestrator | 2025-08-29 15:02:51.188794 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 15:02:51.188810 | orchestrator | Friday 29 August 2025 14:59:55 +0000 (0:00:02.700) 0:00:31.105 ********* 2025-08-29 15:02:51.188851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.188876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.188923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:02:51.188944 | orchestrator | 2025-08-29 15:02:51.188963 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-08-29 15:02:51.188983 | orchestrator | Friday 29 August 2025 14:59:58 +0000 (0:00:03.101) 0:00:34.206 ********* 2025-08-29 15:02:51.189003 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.189023 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:51.189043 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:51.189062 | orchestrator | 2025-08-29 15:02:51.189081 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-08-29 15:02:51.189107 | orchestrator | Friday 29 August 2025 15:00:00 +0000 (0:00:01.287) 0:00:35.494 ********* 2025-08-29 15:02:51.189126 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.189145 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.189163 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.189181 | orchestrator | 2025-08-29 15:02:51.189198 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-08-29 15:02:51.189218 | orchestrator | Friday 29 August 2025 15:00:00 +0000 (0:00:00.332) 0:00:35.826 ********* 2025-08-29 15:02:51.189237 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.189255 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.189273 | 
orchestrator | ok: [testbed-node-2]
2025-08-29 15:02:51.189291 | orchestrator |
2025-08-29 15:02:51.189310 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-08-29 15:02:51.189327 | orchestrator | Friday 29 August 2025 15:00:00 +0000 (0:00:00.332) 0:00:36.159 *********
2025-08-29 15:02:51.189361 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-08-29 15:02:51.189381 | orchestrator | ...ignoring
2025-08-29 15:02:51.189401 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-08-29 15:02:51.189420 | orchestrator | ...ignoring
2025-08-29 15:02:51.189438 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-08-29 15:02:51.189458 | orchestrator | ...ignoring
2025-08-29 15:02:51.189478 | orchestrator |
2025-08-29 15:02:51.189497 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-08-29 15:02:51.189516 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:11.004) 0:00:47.163 *********
2025-08-29 15:02:51.189535 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:02:51.189553 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:02:51.189572 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:02:51.189592 | orchestrator |
2025-08-29 15:02:51.189611 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-08-29 15:02:51.189630 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.828) 0:00:47.992 *********
2025-08-29 15:02:51.189649 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:02:51.189707 | orchestrator | skipping: [testbed-node-1]
2025-08-29
15:02:51.189732 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.189751 | orchestrator | 2025-08-29 15:02:51.189769 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-08-29 15:02:51.189788 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.449) 0:00:48.442 ********* 2025-08-29 15:02:51.189807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.189828 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.189847 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.189865 | orchestrator | 2025-08-29 15:02:51.189885 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-08-29 15:02:51.189904 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.462) 0:00:48.905 ********* 2025-08-29 15:02:51.189923 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.189941 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.189960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.189978 | orchestrator | 2025-08-29 15:02:51.189998 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-08-29 15:02:51.190080 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.437) 0:00:49.342 ********* 2025-08-29 15:02:51.190106 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.190125 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.190144 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.190161 | orchestrator | 2025-08-29 15:02:51.190180 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-08-29 15:02:51.190196 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:00.614) 0:00:49.956 ********* 2025-08-29 15:02:51.190213 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.190229 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:02:51.190247 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.190265 | orchestrator | 2025-08-29 15:02:51.190283 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:02:51.190303 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:00.457) 0:00:50.413 ********* 2025-08-29 15:02:51.190323 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.190341 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.190359 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-08-29 15:02:51.190378 | orchestrator | 2025-08-29 15:02:51.190396 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-08-29 15:02:51.190416 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.412) 0:00:50.826 ********* 2025-08-29 15:02:51.190450 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.190469 | orchestrator | 2025-08-29 15:02:51.190487 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-08-29 15:02:51.190505 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:19.632) 0:01:10.458 ********* 2025-08-29 15:02:51.190525 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.190544 | orchestrator | 2025-08-29 15:02:51.190562 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:02:51.190580 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:00.138) 0:01:10.597 ********* 2025-08-29 15:02:51.190598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.190616 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.190637 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.190656 | orchestrator | 2025-08-29 15:02:51.190703 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-08-29 
15:02:51.190722 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:01.115) 0:01:11.712 ********* 2025-08-29 15:02:51.190741 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.190760 | orchestrator | 2025-08-29 15:02:51.190778 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-08-29 15:02:51.190797 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:07.840) 0:01:19.553 ********* 2025-08-29 15:02:51.190827 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.190847 | orchestrator | 2025-08-29 15:02:51.190867 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-08-29 15:02:51.190887 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:01.559) 0:01:21.113 ********* 2025-08-29 15:02:51.190906 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.190925 | orchestrator | 2025-08-29 15:02:51.190943 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-08-29 15:02:51.190961 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:02.601) 0:01:23.714 ********* 2025-08-29 15:02:51.190977 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.190996 | orchestrator | 2025-08-29 15:02:51.191016 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-08-29 15:02:51.191036 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.133) 0:01:23.847 ********* 2025-08-29 15:02:51.191055 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.191075 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.191094 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.191114 | orchestrator | 2025-08-29 15:02:51.191134 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-08-29 15:02:51.191154 | orchestrator | Friday 29 August 2025 15:00:48 +0000 
(0:00:00.531) 0:01:24.379 ********* 2025-08-29 15:02:51.191173 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.191193 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 15:02:51.191213 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:51.191233 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:51.191253 | orchestrator | 2025-08-29 15:02:51.191272 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 15:02:51.191291 | orchestrator | skipping: no hosts matched 2025-08-29 15:02:51.191310 | orchestrator | 2025-08-29 15:02:51.191327 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 15:02:51.191345 | orchestrator | 2025-08-29 15:02:51.191363 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 15:02:51.191379 | orchestrator | Friday 29 August 2025 15:00:49 +0000 (0:00:00.359) 0:01:24.738 ********* 2025-08-29 15:02:51.191398 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:02:51.191414 | orchestrator | 2025-08-29 15:02:51.191433 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:02:51.191451 | orchestrator | Friday 29 August 2025 15:01:08 +0000 (0:00:19.650) 0:01:44.389 ********* 2025-08-29 15:02:51.191483 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.191504 | orchestrator | 2025-08-29 15:02:51.191524 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:02:51.191543 | orchestrator | Friday 29 August 2025 15:01:29 +0000 (0:00:20.585) 0:02:04.975 ********* 2025-08-29 15:02:51.191562 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.191581 | orchestrator | 2025-08-29 15:02:51.191600 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2025-08-29 15:02:51.191618 | orchestrator | 2025-08-29 15:02:51.191638 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 15:02:51.191657 | orchestrator | Friday 29 August 2025 15:01:32 +0000 (0:00:02.640) 0:02:07.615 ********* 2025-08-29 15:02:51.191703 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:02:51.191723 | orchestrator | 2025-08-29 15:02:51.191743 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:02:51.191777 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:25.574) 0:02:33.189 ********* 2025-08-29 15:02:51.191796 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.191815 | orchestrator | 2025-08-29 15:02:51.191833 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:02:51.191851 | orchestrator | Friday 29 August 2025 15:02:13 +0000 (0:00:15.551) 0:02:48.741 ********* 2025-08-29 15:02:51.191871 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.191890 | orchestrator | 2025-08-29 15:02:51.191908 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 15:02:51.191926 | orchestrator | 2025-08-29 15:02:51.191943 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 15:02:51.191962 | orchestrator | Friday 29 August 2025 15:02:16 +0000 (0:00:02.825) 0:02:51.566 ********* 2025-08-29 15:02:51.191981 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.191999 | orchestrator | 2025-08-29 15:02:51.192019 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 15:02:51.192038 | orchestrator | Friday 29 August 2025 15:02:32 +0000 (0:00:16.784) 0:03:08.350 ********* 2025-08-29 15:02:51.192057 | orchestrator | ok: [testbed-node-0] 2025-08-29 
15:02:51.192095 | orchestrator | 2025-08-29 15:02:51.192131 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 15:02:51.192150 | orchestrator | Friday 29 August 2025 15:02:33 +0000 (0:00:00.620) 0:03:08.971 ********* 2025-08-29 15:02:51.192168 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.192187 | orchestrator | 2025-08-29 15:02:51.192204 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-08-29 15:02:51.192223 | orchestrator | 2025-08-29 15:02:51.192244 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 15:02:51.192262 | orchestrator | Friday 29 August 2025 15:02:35 +0000 (0:00:02.460) 0:03:11.431 ********* 2025-08-29 15:02:51.192280 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:02:51.192300 | orchestrator | 2025-08-29 15:02:51.192317 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-08-29 15:02:51.192337 | orchestrator | Friday 29 August 2025 15:02:36 +0000 (0:00:00.529) 0:03:11.960 ********* 2025-08-29 15:02:51.192356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.192375 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.192393 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.192412 | orchestrator | 2025-08-29 15:02:51.192429 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-08-29 15:02:51.192449 | orchestrator | Friday 29 August 2025 15:02:39 +0000 (0:00:02.487) 0:03:14.448 ********* 2025-08-29 15:02:51.192469 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.192487 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.192506 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.192524 | orchestrator | 2025-08-29 15:02:51.192552 | orchestrator | TASK 
[mariadb : Creating database backup user and setting permissions] ********* 2025-08-29 15:02:51.192573 | orchestrator | Friday 29 August 2025 15:02:41 +0000 (0:00:02.134) 0:03:16.583 ********* 2025-08-29 15:02:51.192608 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.192627 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.192645 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.192664 | orchestrator | 2025-08-29 15:02:51.192758 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-08-29 15:02:51.192777 | orchestrator | Friday 29 August 2025 15:02:43 +0000 (0:00:02.137) 0:03:18.720 ********* 2025-08-29 15:02:51.192797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.192815 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.192834 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:02:51.192855 | orchestrator | 2025-08-29 15:02:51.192874 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-08-29 15:02:51.192893 | orchestrator | Friday 29 August 2025 15:02:45 +0000 (0:00:02.264) 0:03:20.985 ********* 2025-08-29 15:02:51.192912 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:02:51.192929 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:02:51.192947 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:02:51.192965 | orchestrator | 2025-08-29 15:02:51.192984 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 15:02:51.193004 | orchestrator | Friday 29 August 2025 15:02:48 +0000 (0:00:03.007) 0:03:23.992 ********* 2025-08-29 15:02:51.193022 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:02:51.193040 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:02:51.193059 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:02:51.193077 | orchestrator | 2025-08-29 15:02:51.193097 | orchestrator | PLAY RECAP 
*********************************************************************
2025-08-29 15:02:51.193118 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1   rescued=0 ignored=1
2025-08-29 15:02:51.193138 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-08-29 15:02:51.193159 | orchestrator | testbed-node-1 : ok=20  changed=7   unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 15:02:51.193178 | orchestrator | testbed-node-2 : ok=20  changed=7   unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 15:02:51.193198 | orchestrator |
2025-08-29 15:02:51.193218 | orchestrator |
2025-08-29 15:02:51.193235 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:02:51.193251 | orchestrator | Friday 29 August 2025 15:02:48 +0000 (0:00:00.231) 0:03:24.223 *********
2025-08-29 15:02:51.193268 | orchestrator | ===============================================================================
2025-08-29 15:02:51.193287 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.22s
2025-08-29 15:02:51.193304 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.14s
2025-08-29 15:02:51.193331 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 19.63s
2025-08-29 15:02:51.193348 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.78s
2025-08-29 15:02:51.193364 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.00s
2025-08-29 15:02:51.193382 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.84s
2025-08-29 15:02:51.193399 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.47s
2025-08-29 15:02:51.193415 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.14s
2025-08-29 15:02:51.193432 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.99s
2025-08-29 15:02:51.193448 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.41s
2025-08-29 15:02:51.193464 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.17s
2025-08-29 15:02:51.193493 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.10s
2025-08-29 15:02:51.193509 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.01s
2025-08-29 15:02:51.193527 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.92s
2025-08-29 15:02:51.193545 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s
2025-08-29 15:02:51.193562 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.70s
2025-08-29 15:02:51.193580 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.60s
2025-08-29 15:02:51.193597 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.49s
2025-08-29 15:02:51.193614 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.46s
2025-08-29 15:02:51.193632 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.26s
2025-08-29 15:02:51.193649 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED
2025-08-29 15:02:51.193730 | orchestrator | 2025-08-29 15:02:51 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:02:51.193751 | orchestrator | 2025-08-29 15:02:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:02:54.248177 | orchestrator | 2025-08-29 15:02:54
| INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:02:54.249084 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:02:54.250771 | orchestrator | 2025-08-29 15:02:54 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:02:54.250808 | orchestrator | 2025-08-29 15:02:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:39.963923 | orchestrator | 2025-08-29 15:03:39 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in
state STARTED 2025-08-29 15:03:39.966188 | orchestrator | 2025-08-29 15:03:39 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:03:39.966242 | orchestrator | 2025-08-29 15:03:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:43.007903 | orchestrator | 2025-08-29 15:03:43 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:03:43.007998 | orchestrator | 2025-08-29 15:03:43 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:03:43.008294 | orchestrator | 2025-08-29 15:03:43 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:03:43.008309 | orchestrator | 2025-08-29 15:03:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:46.044347 | orchestrator | 2025-08-29 15:03:46 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:03:46.045734 | orchestrator | 2025-08-29 15:03:46 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state STARTED 2025-08-29 15:03:46.047716 | orchestrator | 2025-08-29 15:03:46 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:03:46.047774 | orchestrator | 2025-08-29 15:03:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:49.093956 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:03:49.097003 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task b76230fd-a297-414d-bcc9-bed502b0c54b is in state SUCCESS 2025-08-29 15:03:49.098745 | orchestrator | 2025-08-29 15:03:49.098793 | orchestrator | 2025-08-29 15:03:49.098805 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 15:03:49.098816 | orchestrator | 2025-08-29 15:03:49.098826 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 15:03:49.098862 | orchestrator | 
Friday 29 August 2025 15:01:36 +0000 (0:00:00.640) 0:00:00.640 *********
2025-08-29 15:03:49.098873 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:49.098884 | orchestrator |
2025-08-29 15:03:49.098894 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 15:03:49.098904 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:00.672) 0:00:01.313 *********
2025-08-29 15:03:49.098913 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.098924 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.098934 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.098944 | orchestrator |
2025-08-29 15:03:49.098954 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 15:03:49.098963 | orchestrator | Friday 29 August 2025 15:01:37 +0000 (0:00:00.844) 0:00:02.157 *********
2025-08-29 15:03:49.098973 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.098982 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.098992 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099001 | orchestrator |
2025-08-29 15:03:49.099012 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 15:03:49.099021 | orchestrator | Friday 29 August 2025 15:01:37 +0000 (0:00:00.317) 0:00:02.475 *********
2025-08-29 15:03:49.099030 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099040 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099050 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099059 | orchestrator |
2025-08-29 15:03:49.099069 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 15:03:49.099078 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:00.832) 0:00:03.308 *********
2025-08-29 15:03:49.099088 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099097 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099106 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099115 | orchestrator |
2025-08-29 15:03:49.099125 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 15:03:49.099134 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.331) 0:00:03.639 *********
2025-08-29 15:03:49.099144 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099153 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099163 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099550 | orchestrator |
2025-08-29 15:03:49.099558 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 15:03:49.099564 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.315) 0:00:03.954 *********
2025-08-29 15:03:49.099570 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099576 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099582 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099587 | orchestrator |
2025-08-29 15:03:49.099594 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 15:03:49.099600 | orchestrator | Friday 29 August 2025 15:01:39 +0000 (0:00:00.313) 0:00:04.267 *********
2025-08-29 15:03:49.099624 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.099632 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.099637 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.099643 | orchestrator |
2025-08-29 15:03:49.099649 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 15:03:49.099655 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.517) 0:00:04.785 *********
2025-08-29 15:03:49.099661 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099666 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099672 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099678 | orchestrator |
2025-08-29 15:03:49.099683 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 15:03:49.099689 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:00.303) 0:00:05.088 *********
2025-08-29 15:03:49.099695 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:03:49.099711 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:03:49.099717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:03:49.099723 | orchestrator |
2025-08-29 15:03:49.099728 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 15:03:49.099734 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.690) 0:00:05.779 *********
2025-08-29 15:03:49.099740 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.099746 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.099751 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.099757 | orchestrator |
2025-08-29 15:03:49.099796 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 15:03:49.099803 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:00.410) 0:00:06.189 *********
2025-08-29 15:03:49.099809 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:03:49.099815 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:03:49.099820 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:03:49.099826 | orchestrator |
2025-08-29 15:03:49.099834 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 15:03:49.100040 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:02.139) 0:00:08.329 *********
2025-08-29 15:03:49.100056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:03:49.100063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:03:49.100086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:03:49.100095 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100104 | orchestrator |
2025-08-29 15:03:49.100113 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 15:03:49.100158 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.542) 0:00:08.871 *********
2025-08-29 15:03:49.100171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100203 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100212 | orchestrator |
2025-08-29 15:03:49.100222 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 15:03:49.100230 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.806) 0:00:09.677 *********
2025-08-29 15:03:49.100242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100283 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100291 | orchestrator |
2025-08-29 15:03:49.100301 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 15:03:49.100307 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.189) 0:00:09.867 *********
2025-08-29 15:03:49.100314 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '070bd6071033', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 15:01:42.404427', 'end': '2025-08-29 15:01:42.455779', 'delta': '0:00:00.051352', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['070bd6071033'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100328 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3eefe679e0ec', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 15:01:43.148429', 'end': '2025-08-29 15:01:43.179093', 'delta': '0:00:00.030664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3eefe679e0ec'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100357 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9874f98c4097', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 15:01:43.659645', 'end': '2025-08-29 15:01:43.712026', 'delta': '0:00:00.052381', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9874f98c4097'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:03:49.100364 | orchestrator |
2025-08-29 15:03:49.100369 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 15:03:49.100374 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.409) 0:00:10.276 *********
2025-08-29 15:03:49.100380 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.100385 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:49.100391 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:49.100396 | orchestrator |
2025-08-29 15:03:49.100401 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 15:03:49.100406 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.465) 0:00:10.741 *********
2025-08-29 15:03:49.100412 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-08-29 15:03:49.100417 | orchestrator |
2025-08-29 15:03:49.100423 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 15:03:49.100434 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:01.758) 0:00:12.500 *********
2025-08-29 15:03:49.100439 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100445 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100450 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100455 | orchestrator |
2025-08-29 15:03:49.100461 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 15:03:49.100466 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.302) 0:00:12.803 *********
2025-08-29 15:03:49.100471 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100477 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100482 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100488 | orchestrator |
2025-08-29 15:03:49.100493 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:03:49.100498 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.414) 0:00:13.217 *********
2025-08-29 15:03:49.100504 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100509 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100514 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100520 | orchestrator |
2025-08-29 15:03:49.100525 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 15:03:49.100531 | orchestrator | Friday 29 August 2025 15:01:49 +0000 (0:00:00.488) 0:00:13.706 *********
2025-08-29 15:03:49.100536 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:49.100542 | orchestrator |
2025-08-29 15:03:49.100547 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 15:03:49.100552 | orchestrator | Friday 29 August 2025 15:01:49 +0000 (0:00:00.145) 0:00:13.851 *********
2025-08-29 15:03:49.100558 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100563 | orchestrator |
2025-08-29 15:03:49.100568 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:03:49.100574 | orchestrator | Friday 29 August 2025 15:01:49 +0000 (0:00:00.239) 0:00:14.091 *********
2025-08-29 15:03:49.100579 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100584 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100590 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100595 | orchestrator |
2025-08-29 15:03:49.100600 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 15:03:49.100650 | orchestrator | Friday 29 August 2025 15:01:49 +0000 (0:00:00.289) 0:00:14.380 *********
2025-08-29 15:03:49.100656 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100661 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100675 | orchestrator |
2025-08-29 15:03:49.100681 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 15:03:49.100687 | orchestrator | Friday 29 August 2025 15:01:50 +0000 (0:00:00.329) 0:00:14.710 *********
2025-08-29 15:03:49.100693 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100699 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100705 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100711 | orchestrator |
2025-08-29 15:03:49.100717 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 15:03:49.100723 | orchestrator | Friday 29 August 2025 15:01:50 +0000 (0:00:00.512) 0:00:15.223 *********
2025-08-29 15:03:49.100729 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100735 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100742 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100748 | orchestrator |
2025-08-29 15:03:49.100753 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 15:03:49.100760 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.338) 0:00:15.561 *********
2025-08-29 15:03:49.100765 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100772 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100783 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100789 | orchestrator |
2025-08-29 15:03:49.100795 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 15:03:49.100801 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.350) 0:00:15.912 *********
2025-08-29 15:03:49.100807 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100818 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100824 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100833 | orchestrator |
2025-08-29 15:03:49.100842 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 15:03:49.100878 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.327) 0:00:16.239 *********
2025-08-29 15:03:49.100889 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.100898 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.100907 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:49.100913 | orchestrator |
2025-08-29 15:03:49.100918 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 15:03:49.100924 | orchestrator | Friday 29 August 2025 15:01:52 +0000 (0:00:00.535) 0:00:16.775 *********
2025-08-29 15:03:49.100930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3', 'dm-uuid-LVM-uD6Mo9vRae6rhHQ3Cv8iBIHiOkh7vDv3P02FpXK4GRvrM2StMq05gwLQahS4Aim9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4', 'dm-uuid-LVM-OpprThIuZ7OCUBOX6wZncT3Dym3eACA2PsddSGncHVfpnqMc8ruraJK2Q8IEJ5jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.100999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc', 'dm-uuid-LVM-201yAH0joyzRFH6sqEqXj7oSaWavLiqWRSVrbAZzOt1xNf7XwuDo3oXGgvcSNdIa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OOqYyD-X3ep-idi1-Ed6C-DyzY-wRSz-fgidv8', 'scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714', 'scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca', 'dm-uuid-LVM-gEFnHyeHdbJqeHuGQxKJcMhhl1Ir8Lgl2cv6rd0M49f0CvZBMvhuDshIUQj7B0B8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N17mz5-EUQz-V9n7-C4vu-3ISy-nma3-edJzNt', 'scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c', 'scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34', 'scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101143 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:49.101148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PK1oQc-PafE-SXTJ-gC8J-TqLc-GIC3-HqLAAe', 'scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b', 'scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Exi2ct-gAdU-6Qq1-Ctrc-d3jT-eYnt-ALlvmg', 'scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95', 'scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d', 'scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5', 'dm-uuid-LVM-ntkdiD7zsbM03QLUVmvmszSkPpDq2T3WNBLwRo0cmvTQbmNZXYXSsFfmJNZl8Ng2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-08-29 15:03:49.101232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791', 'dm-uuid-LVM-mlP7WRc7Ld5D4hI6Q71tFUCAmKO8L6bM0SFe3GwFSt363EKsLiZu4Xr2Fcm4SqAg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101238 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:49.101244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:03:49.101250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize':
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:49.101301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:49.101308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BtT3it-8JPO-VgWx-exfl-04Wt-TZQB-eSuhRn', 'scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9', 'scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:49.101318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IUdtJC-z0Mo-rn1o-MAmW-S78C-2oty-9gBk4d', 'scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b', 'scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:49.101323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598', 'scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:49.101335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:49.101341 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.101347 | orchestrator | 2025-08-29 15:03:49.101352 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 15:03:49.101358 | orchestrator | Friday 29 August 2025 15:01:52 +0000 (0:00:00.549) 0:00:17.325 ********* 2025-08-29 15:03:49.101364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3', 'dm-uuid-LVM-uD6Mo9vRae6rhHQ3Cv8iBIHiOkh7vDv3P02FpXK4GRvrM2StMq05gwLQahS4Aim9'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4', 'dm-uuid-LVM-OpprThIuZ7OCUBOX6wZncT3Dym3eACA2PsddSGncHVfpnqMc8ruraJK2Q8IEJ5jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc', 'dm-uuid-LVM-201yAH0joyzRFH6sqEqXj7oSaWavLiqWRSVrbAZzOt1xNf7XwuDo3oXGgvcSNdIa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca', 'dm-uuid-LVM-gEFnHyeHdbJqeHuGQxKJcMhhl1Ir8Lgl2cv6rd0M49f0CvZBMvhuDshIUQj7B0B8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101511 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba98bb24-5383-4aa5-9967-a5d28a51fb78-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dda150a8--39d5--5493--abc9--b03fdb7d62e3-osd--block--dda150a8--39d5--5493--abc9--b03fdb7d62e3'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OOqYyD-X3ep-idi1-Ed6C-DyzY-wRSz-fgidv8', 'scsi-0QEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714', 'scsi-SQEMU_QEMU_HARDDISK_133692c3-7f4d-47c9-95e5-0fdaff452714'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c0ce2805--49d2--5cc8--844e--183b484fa1c4-osd--block--c0ce2805--49d2--5cc8--844e--183b484fa1c4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N17mz5-EUQz-V9n7-C4vu-3ISy-nma3-edJzNt', 'scsi-0QEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c', 'scsi-SQEMU_QEMU_HARDDISK_26305d6e-8929-43c9-b467-e677b222946c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34', 'scsi-SQEMU_QEMU_HARDDISK_c2014fab-2f96-4ac7-a596-9bdfe7e77c34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101593 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101599 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.101679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101701 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5', 'dm-uuid-LVM-ntkdiD7zsbM03QLUVmvmszSkPpDq2T3WNBLwRo0cmvTQbmNZXYXSsFfmJNZl8Ng2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101865 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62da809-aa2e-4162-92b4-e8a8bc4be399-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
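The long runs of "skipping" entries above come from a ceph-ansible loop over every entry in `ansible_facts.devices`, guarded by the condition `osd_auto_discovery | default(False) | bool`. Because this testbed pins its OSD devices explicitly, the variable is unset, the conditional resolves to False, and each device item is skipped and echoed in full. A minimal Python sketch of that default-then-cast semantics (helper names here are hypothetical; the real evaluation happens inside Ansible's Jinja2 engine):

```python
# Sketch of the `osd_auto_discovery | default(False) | bool` skip condition.
# Helper names are illustrative; Ansible evaluates this via Jinja2 filters.

_UNDEFINED = object()  # stand-in for an undefined Jinja2 variable

def default(value, fallback):
    """Return fallback when the variable is undefined, else the value."""
    return fallback if value is _UNDEFINED else value

def to_bool(value):
    """Loose truthiness cast, similar in spirit to Ansible's `bool` filter."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1", "on")

def should_run(osd_auto_discovery=_UNDEFINED):
    return to_bool(default(osd_auto_discovery, False))

print(should_run())        # variable undefined -> False -> every item skipped
print(should_run("true"))  # would enable per-device auto discovery
```

With the variable undefined the task skips once per block device, which is why loop0 through loop7, sda through sdd, and sr0 each produce their own skip line per node.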
2025-08-29 15:03:49.101897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791', 'dm-uuid-LVM-mlP7WRc7Ld5D4hI6Q71tFUCAmKO8L6bM0SFe3GwFSt363EKsLiZu4Xr2Fcm4SqAg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--346e0f34--2e25--5bf0--9181--de3fb405aafc-osd--block--346e0f34--2e25--5bf0--9181--de3fb405aafc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PK1oQc-PafE-SXTJ-gC8J-TqLc-GIC3-HqLAAe', 'scsi-0QEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b', 'scsi-SQEMU_QEMU_HARDDISK_c30c9ad8-fb52-441e-a5e8-07e208e64b3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ca3f02ac--b393--504d--bf7e--2b1a4059feca-osd--block--ca3f02ac--b393--504d--bf7e--2b1a4059feca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Exi2ct-gAdU-6Qq1-Ctrc-d3jT-eYnt-ALlvmg', 'scsi-0QEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95', 'scsi-SQEMU_QEMU_HARDDISK_4ef0722a-e89c-418b-acd0-a0241f1ecb95'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d', 'scsi-SQEMU_QEMU_HARDDISK_e378fad3-fb01-4445-a487-4c35c34fc10d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.101984 | orchestrator | skipping: 
[testbed-node-4] 2025-08-29 15:03:49.102003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102157 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3e9de19-b9f7-492e-a1b9-2626c456e661-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:03:49.102199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5-osd--block--bbd8d281--36ff--5086--a3ca--2bb41bb9eed5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BtT3it-8JPO-VgWx-exfl-04Wt-TZQB-eSuhRn', 'scsi-0QEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9', 'scsi-SQEMU_QEMU_HARDDISK_01581e26-5f0c-4aa4-b2ea-55eb57d083c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102209 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d9c5dbd3--dfd6--59a8--a565--791b79996791-osd--block--d9c5dbd3--dfd6--59a8--a565--791b79996791'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IUdtJC-z0Mo-rn1o-MAmW-S78C-2oty-9gBk4d', 'scsi-0QEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b', 'scsi-SQEMU_QEMU_HARDDISK_6d7b9fe8-4cbf-41b4-b3de-8e907a23c66b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598', 'scsi-SQEMU_QEMU_HARDDISK_733ac04c-b863-4853-ba25-ee7fcff80598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:49.102249 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102263 | orchestrator | 2025-08-29 15:03:49.102273 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 15:03:49.102283 | orchestrator | Friday 29 August 2025 15:01:53 +0000 (0:00:00.592) 0:00:17.917 ********* 2025-08-29 15:03:49.102292 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:49.102302 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:49.102311 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:49.102320 | orchestrator | 2025-08-29 15:03:49.102329 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:03:49.102338 | orchestrator | Friday 29 August 2025 15:01:54 +0000 (0:00:00.751) 0:00:18.668 ********* 2025-08-29 15:03:49.102347 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:49.102356 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:49.102364 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:49.102373 | orchestrator | 2025-08-29 15:03:49.102382 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:03:49.102391 | orchestrator | Friday 29 August 2025 15:01:54 +0000 (0:00:00.461) 0:00:19.130 ********* 2025-08-29 15:03:49.102399 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:49.102408 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:49.102418 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:49.102427 | orchestrator | 2025-08-29 15:03:49.102436 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:03:49.102445 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:00.715) 0:00:19.846 
********* 2025-08-29 15:03:49.102454 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.102463 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.102473 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102482 | orchestrator | 2025-08-29 15:03:49.102490 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:03:49.102500 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:00.292) 0:00:20.139 ********* 2025-08-29 15:03:49.102509 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.102518 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.102527 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102536 | orchestrator | 2025-08-29 15:03:49.102545 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:03:49.102554 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.423) 0:00:20.562 ********* 2025-08-29 15:03:49.102564 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.102573 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.102582 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102591 | orchestrator | 2025-08-29 15:03:49.102600 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:03:49.102682 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.480) 0:00:21.042 ********* 2025-08-29 15:03:49.102692 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 15:03:49.102702 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:03:49.102711 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:03:49.102720 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:03:49.102729 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:03:49.102738 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:03:49.102747 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:03:49.102756 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:03:49.102765 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:03:49.102773 | orchestrator | 2025-08-29 15:03:49.102782 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:03:49.102791 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:00.848) 0:00:21.891 ********* 2025-08-29 15:03:49.102800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:03:49.102809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:03:49.102825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:03:49.102834 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.102842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:03:49.102852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:03:49.102861 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:03:49.102870 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.102878 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:03:49.102887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 15:03:49.102896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:03:49.102905 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102913 | orchestrator | 2025-08-29 15:03:49.102922 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:03:49.102931 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:00.376) 0:00:22.267 ********* 2025-08-29 
15:03:49.102940 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:49.102949 | orchestrator | 2025-08-29 15:03:49.102959 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:03:49.102972 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.723) 0:00:22.991 ********* 2025-08-29 15:03:49.102978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.102983 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.102989 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.102994 | orchestrator | 2025-08-29 15:03:49.103005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:03:49.103010 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.337) 0:00:23.328 ********* 2025-08-29 15:03:49.103016 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103021 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.103027 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.103032 | orchestrator | 2025-08-29 15:03:49.103038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:03:49.103043 | orchestrator | Friday 29 August 2025 15:01:59 +0000 (0:00:00.306) 0:00:23.635 ********* 2025-08-29 15:03:49.103049 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103054 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.103060 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:49.103065 | orchestrator | 2025-08-29 15:03:49.103070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:03:49.103076 | orchestrator | Friday 29 August 2025 15:01:59 +0000 (0:00:00.321) 0:00:23.956 ********* 2025-08-29 
15:03:49.103081 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:49.103087 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:49.103092 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:49.103097 | orchestrator | 2025-08-29 15:03:49.103103 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:03:49.103108 | orchestrator | Friday 29 August 2025 15:02:00 +0000 (0:00:00.593) 0:00:24.550 ********* 2025-08-29 15:03:49.103114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:49.103119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:49.103125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:49.103130 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103135 | orchestrator | 2025-08-29 15:03:49.103141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:03:49.103146 | orchestrator | Friday 29 August 2025 15:02:00 +0000 (0:00:00.379) 0:00:24.929 ********* 2025-08-29 15:03:49.103152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:49.103157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:49.103168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:49.103173 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103179 | orchestrator | 2025-08-29 15:03:49.103184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:03:49.103190 | orchestrator | Friday 29 August 2025 15:02:00 +0000 (0:00:00.459) 0:00:25.388 ********* 2025-08-29 15:03:49.103195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:49.103201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:49.103206 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:49.103211 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103217 | orchestrator | 2025-08-29 15:03:49.103222 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:03:49.103228 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:00.434) 0:00:25.823 ********* 2025-08-29 15:03:49.103233 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:49.103238 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:49.103244 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:49.103249 | orchestrator | 2025-08-29 15:03:49.103255 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:03:49.103260 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:00.334) 0:00:26.158 ********* 2025-08-29 15:03:49.103265 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:03:49.103271 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:03:49.103276 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:03:49.103282 | orchestrator | 2025-08-29 15:03:49.103287 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:03:49.103292 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:00.501) 0:00:26.659 ********* 2025-08-29 15:03:49.103298 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:49.103303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:49.103309 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:49.103314 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:03:49.103320 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-08-29 15:03:49.103325 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:03:49.103330 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:03:49.103336 | orchestrator | 2025-08-29 15:03:49.103341 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 15:03:49.103347 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:01.012) 0:00:27.672 ********* 2025-08-29 15:03:49.103352 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:49.103357 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:49.103363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:49.103368 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:03:49.103373 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:03:49.103385 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:03:49.103390 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:03:49.103396 | orchestrator | 2025-08-29 15:03:49.103405 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 15:03:49.103410 | orchestrator | Friday 29 August 2025 15:02:05 +0000 (0:00:01.967) 0:00:29.640 ********* 2025-08-29 15:03:49.103416 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:49.103426 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:49.103432 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 15:03:49.103437 | orchestrator | 2025-08-29 15:03:49.103443 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 15:03:49.103448 | orchestrator | Friday 29 August 2025 15:02:05 +0000 (0:00:00.395) 0:00:30.035 ********* 2025-08-29 15:03:49.103454 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:49.103461 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:49.103467 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:49.103472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:49.103478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:49.103484 | orchestrator | 2025-08-29 15:03:49.103489 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-08-29 15:03:49.103495 | orchestrator | Friday 29 August 2025 15:02:51 +0000 (0:00:45.535) 0:01:15.571 ********* 2025-08-29 15:03:49.103500 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103505 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103511 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103532 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 15:03:49.103538 | orchestrator | 2025-08-29 15:03:49.103543 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 15:03:49.103549 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:25.552) 0:01:41.123 ********* 2025-08-29 15:03:49.103554 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103559 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103565 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103570 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103575 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103586 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:49.103596 | orchestrator | 2025-08-29 15:03:49.103601 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 15:03:49.103624 | orchestrator | Friday 29 August 2025 15:03:28 +0000 (0:00:11.988) 0:01:53.111 ********* 2025-08-29 15:03:49.103634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103640 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:49.103645 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103660 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:49.103666 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103675 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103680 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:49.103685 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103691 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103696 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:49.103702 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103707 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103712 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
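The "create openstack pool(s)" items earlier in this log each carry explicit `pg_num`, `pgp_num`, `size`, `rule_name` and `application` settings. A minimal Python sketch of how such a spec dict could map onto `ceph osd pool create` and follow-up CLI calls (a hypothetical helper for illustration, not the actual ceph-ansible implementation):

```python
# Hypothetical helper: translate a pool spec dict (shaped like the log items)
# into the ceph CLI commands a deployment role would typically issue.
def pool_commands(pool):
    name = pool["name"]
    cmds = [
        # replicated pool with explicit placement-group counts and crush rule
        f"ceph osd pool create {name} {pool['pg_num']} {pool['pgp_num']} "
        f"replicated {pool['rule_name']}",
        # replica count from the spec
        f"ceph osd pool set {name} size {pool['size']}",
        # tag the pool for its consumer (rbd for Cinder/Glance/Nova)
        f"ceph osd pool application enable {name} {pool['application']}",
    ]
    if not pool["pg_autoscale_mode"]:
        # the log items set pg_autoscale_mode: False, i.e. autoscaler off
        cmds.append(f"ceph osd pool set {name} pg_autoscale_mode off")
    return cmds

spec = {"application": "rbd", "name": "volumes", "pg_autoscale_mode": False,
        "pg_num": 32, "pgp_num": 32, "rule_name": "replicated_rule",
        "size": 3, "type": 1}
for cmd in pool_commands(spec):
    print(cmd)
```

Running this against the `volumes` item from the log yields four commands: pool creation with 32 PGs, `size 3`, `application enable ... rbd`, and autoscaler off.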
2025-08-29 15:03:49.103718 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103723 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:49.103728 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:49.103733 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:49.103739 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-08-29 15:03:49.103744 | orchestrator | 2025-08-29 15:03:49.103750 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:03:49.103755 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-08-29 15:03:49.103762 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 15:03:49.103768 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 15:03:49.103773 | orchestrator | 2025-08-29 15:03:49.103778 | orchestrator | 2025-08-29 15:03:49.103784 | orchestrator | 2025-08-29 15:03:49.103789 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:03:49.103795 | orchestrator | Friday 29 August 2025 15:03:45 +0000 (0:00:17.299) 0:02:10.411 ********* 2025-08-29 15:03:49.103800 | orchestrator | =============================================================================== 2025-08-29 15:03:49.103805 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.54s 2025-08-29 15:03:49.103812 | orchestrator | generate keys ---------------------------------------------------------- 25.55s 2025-08-29 15:03:49.103820 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.30s 
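The "copy ceph key(s) if needed" task above repeats the same node-0/node-1/node-2 delegation for every keyring, ending in `delegate_to: {{ item.1 }}` — i.e. one task run per (keyring, monitor) pair. A small sketch of that product-style fan-out, assuming hypothetical key and host lists:

```python
from itertools import product

# Hypothetical inputs mirroring the log: each keyring is delegated to every mon.
keys = ["client.admin", "client.cinder", "client.glance"]
mons = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

# Mirrors the nested-loop behaviour behind "delegate_to: {{ item.1 }}":
# item.0 is the keyring, item.1 is the target monitor host.
tasks = [(key, host) for key, host in product(keys, mons)]

for key, host in tasks:
    print(f"copy {key} -> {host}")
print(len(tasks))  # 3 keys x 3 mons = 9 delegated copies
```

The ordering matches the log: all three monitors are visited for one keyring before the loop moves on to the next one.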
2025-08-29 15:03:49.103829 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s 2025-08-29 15:03:49.103838 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s 2025-08-29 15:03:49.103846 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.97s 2025-08-29 15:03:49.103861 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s 2025-08-29 15:03:49.103871 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2025-08-29 15:03:49.103880 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-08-29 15:03:49.103890 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.84s 2025-08-29 15:03:49.103896 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s 2025-08-29 15:03:49.103901 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s 2025-08-29 15:03:49.103906 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s 2025-08-29 15:03:49.103912 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2025-08-29 15:03:49.103917 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2025-08-29 15:03:49.103922 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2025-08-29 15:03:49.103928 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s 2025-08-29 15:03:49.103933 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-08-29 15:03:49.103938 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2025-08-29 
15:03:49.103944 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2025-08-29 15:03:49.103949 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:03:49.103955 | orchestrator | 2025-08-29 15:03:49 | INFO  | Task 396626cc-6016-4c26-ae5b-3a41e01e648e is in state STARTED 2025-08-29 15:03:49.103960 | orchestrator | 2025-08-29 15:03:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:52.138407 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:03:52.141754 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:03:52.143258 | orchestrator | 2025-08-29 15:03:52 | INFO  | Task 396626cc-6016-4c26-ae5b-3a41e01e648e is in state STARTED 2025-08-29 15:03:52.143290 | orchestrator | 2025-08-29 15:03:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:16.552852 | orchestrator | 2025-08-29 15:04:16 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:04:16.554435 | orchestrator | 2025-08-29 15:04:16 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:04:16.555518 | orchestrator | 2025-08-29 15:04:16 | INFO  | Task 396626cc-6016-4c26-ae5b-3a41e01e648e is in state SUCCESS 2025-08-29 15:04:16.555813 | orchestrator | 2025-08-29 15:04:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:19.595473 | orchestrator | 2025-08-29 15:04:19 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:04:19.596928 | orchestrator | 2025-08-29 15:04:19 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED 2025-08-29 15:04:19.599147 | orchestrator | 2025-08-29 15:04:19 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:04:19.599296 | orchestrator | 2025-08-29 15:04:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:47.010419 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state STARTED 2025-08-29 15:04:47.013049 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED 2025-08-29 15:04:47.014523 | orchestrator | 2025-08-29 15:04:47 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED 2025-08-29 15:04:47.014611 | orchestrator | 2025-08-29 15:04:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:50.064839 | orchestrator | 2025-08-29 15:04:50.064935 | orchestrator | 2025-08-29 15:04:50.064948 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-08-29 15:04:50.064955 | orchestrator | 2025-08-29 15:04:50.064962 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-08-29 15:04:50.064970 | orchestrator | Friday 29 August 2025 15:03:49 +0000 (0:00:00.148) 0:00:00.148 ********* 2025-08-29 15:04:50.064977 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-08-29 15:04:50.064986 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.064993 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065088 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:04:50.065097 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065105 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-08-29 15:04:50.065111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-08-29 15:04:50.065116 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.gnocchi.keyring) 2025-08-29 15:04:50.065122 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-08-29 15:04:50.065128 | orchestrator | 2025-08-29 15:04:50.065134 | orchestrator | TASK [Create share directory] ************************************************** 2025-08-29 15:04:50.065308 | orchestrator | Friday 29 August 2025 15:03:54 +0000 (0:00:04.084) 0:00:04.233 ********* 2025-08-29 15:04:50.065324 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 15:04:50.065331 | orchestrator | 2025-08-29 15:04:50.065338 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-08-29 15:04:50.065345 | orchestrator | Friday 29 August 2025 15:03:55 +0000 (0:00:01.064) 0:00:05.297 ********* 2025-08-29 15:04:50.065351 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-08-29 15:04:50.065359 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065367 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065375 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:04:50.065381 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065388 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-08-29 15:04:50.065395 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-08-29 15:04:50.065402 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:04:50.065408 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-08-29 15:04:50.065414 | orchestrator | 2025-08-29 15:04:50.065420 | orchestrator | 
TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 15:04:50.065427 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:13.330) 0:00:18.628 ********* 2025-08-29 15:04:50.065477 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 15:04:50.065485 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065502 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065508 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:04:50.065516 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:04:50.065524 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 15:04:50.065532 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 15:04:50.065592 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:04:50.065601 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 15:04:50.065610 | orchestrator | 2025-08-29 15:04:50.065617 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:04:50.065624 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:04:50.065633 | orchestrator | 2025-08-29 15:04:50.065639 | orchestrator | 2025-08-29 15:04:50.065646 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:04:50.065653 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:06.987) 0:00:25.616 ********* 2025-08-29 15:04:50.065660 | orchestrator | =============================================================================== 2025-08-29 15:04:50.065667 | orchestrator | Write ceph keys to the 
share directory --------------------------------- 13.33s 2025-08-29 15:04:50.065689 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.99s 2025-08-29 15:04:50.065696 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s 2025-08-29 15:04:50.065703 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2025-08-29 15:04:50.065711 | orchestrator | 2025-08-29 15:04:50.065717 | orchestrator | 2025-08-29 15:04:50.065725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:04:50.065732 | orchestrator | 2025-08-29 15:04:50.065758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:04:50.065765 | orchestrator | Friday 29 August 2025 15:02:53 +0000 (0:00:00.285) 0:00:00.285 ********* 2025-08-29 15:04:50.065772 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.065780 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.065787 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.065794 | orchestrator | 2025-08-29 15:04:50.065801 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:04:50.065808 | orchestrator | Friday 29 August 2025 15:02:53 +0000 (0:00:00.373) 0:00:00.658 ********* 2025-08-29 15:04:50.065815 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-08-29 15:04:50.065823 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 15:04:50.065830 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 15:04:50.065837 | orchestrator | 2025-08-29 15:04:50.065844 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 15:04:50.065852 | orchestrator | 2025-08-29 15:04:50.065859 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-08-29 15:04:50.065866 | orchestrator | Friday 29 August 2025 15:02:53 +0000 (0:00:00.422) 0:00:01.081 ********* 2025-08-29 15:04:50.065874 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:50.065881 | orchestrator | 2025-08-29 15:04:50.065889 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 15:04:50.065895 | orchestrator | Friday 29 August 2025 15:02:54 +0000 (0:00:00.494) 0:00:01.575 ********* 2025-08-29 15:04:50.065909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.065948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.065962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.065969 | orchestrator | 2025-08-29 15:04:50.065975 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 15:04:50.065982 | orchestrator | Friday 29 August 2025 15:02:55 +0000 (0:00:01.159) 0:00:02.735 ********* 2025-08-29 15:04:50.065987 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.065993 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.065999 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.066008 | orchestrator | 2025-08-29 15:04:50.066070 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:04:50.066080 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.674) 0:00:03.409 
********* 2025-08-29 15:04:50.066087 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:04:50.066102 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:04:50.066110 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:04:50.066118 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:04:50.066127 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:04:50.066134 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:04:50.066142 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:04:50.066150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:04:50.066158 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:04:50.066172 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:04:50.066179 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:04:50.066186 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:04:50.066193 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:04:50.066200 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:04:50.066207 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:04:50.066214 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:04:50.066221 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:04:50.066229 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:04:50.066237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:04:50.066244 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:04:50.066252 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:04:50.066259 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:04:50.066266 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:04:50.066274 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:04:50.066283 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 15:04:50.066293 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 15:04:50.066299 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 15:04:50.066305 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 15:04:50.066311 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 15:04:50.066316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 15:04:50.066322 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 15:04:50.066328 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 15:04:50.066334 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 15:04:50.066340 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 15:04:50.066346 | orchestrator | 2025-08-29 15:04:50.066353 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066359 | orchestrator | Friday 29 August 2025 15:02:57 +0000 (0:00:00.768) 0:00:04.177 ********* 2025-08-29 15:04:50.066365 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066376 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.066389 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.066395 | orchestrator | 2025-08-29 15:04:50.066401 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.066407 | orchestrator | Friday 29 August 2025 15:02:57 +0000 (0:00:00.313) 0:00:04.491 ********* 2025-08-29 15:04:50.066413 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066419 | orchestrator | 2025-08-29 15:04:50.066430 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.066436 | orchestrator | Friday 29 August 2025 15:02:57 +0000 (0:00:00.132) 0:00:04.624 ********* 2025-08-29 
15:04:50.066442 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.066454 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.066461 | orchestrator | 2025-08-29 15:04:50.066467 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066473 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.471) 0:00:05.095 ********* 2025-08-29 15:04:50.066480 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066486 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.066492 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.066497 | orchestrator | 2025-08-29 15:04:50.066504 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.066511 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.294) 0:00:05.390 ********* 2025-08-29 15:04:50.066517 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066522 | orchestrator | 2025-08-29 15:04:50.066529 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.066535 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.151) 0:00:05.541 ********* 2025-08-29 15:04:50.066561 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066567 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.066572 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.066578 | orchestrator | 2025-08-29 15:04:50.066584 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066590 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.303) 0:00:05.845 ********* 2025-08-29 15:04:50.066596 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066602 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.066607 | orchestrator | 
ok: [testbed-node-2] 2025-08-29 15:04:50.066613 | orchestrator | 2025-08-29 15:04:50.066619 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.066624 | orchestrator | Friday 29 August 2025 15:02:59 +0000 (0:00:00.317) 0:00:06.162 ********* 2025-08-29 15:04:50.066630 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066636 | orchestrator | 2025-08-29 15:04:50.066641 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.066647 | orchestrator | Friday 29 August 2025 15:02:59 +0000 (0:00:00.330) 0:00:06.493 ********* 2025-08-29 15:04:50.066653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.066664 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.066669 | orchestrator | 2025-08-29 15:04:50.066674 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066680 | orchestrator | Friday 29 August 2025 15:02:59 +0000 (0:00:00.322) 0:00:06.815 ********* 2025-08-29 15:04:50.066686 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066691 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.066697 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.066703 | orchestrator | 2025-08-29 15:04:50.066709 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.066715 | orchestrator | Friday 29 August 2025 15:03:00 +0000 (0:00:00.331) 0:00:07.146 ********* 2025-08-29 15:04:50.066721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066727 | orchestrator | 2025-08-29 15:04:50.066732 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.066754 | orchestrator | Friday 29 August 2025 15:03:00 +0000 (0:00:00.150) 0:00:07.297 
********* 2025-08-29 15:04:50.066856 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.066868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.066873 | orchestrator | 2025-08-29 15:04:50.066880 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066885 | orchestrator | Friday 29 August 2025 15:03:00 +0000 (0:00:00.322) 0:00:07.619 ********* 2025-08-29 15:04:50.066891 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066897 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.066903 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.066908 | orchestrator | 2025-08-29 15:04:50.066914 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.066920 | orchestrator | Friday 29 August 2025 15:03:01 +0000 (0:00:00.518) 0:00:08.138 ********* 2025-08-29 15:04:50.066925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066931 | orchestrator | 2025-08-29 15:04:50.066937 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.066943 | orchestrator | Friday 29 August 2025 15:03:01 +0000 (0:00:00.135) 0:00:08.273 ********* 2025-08-29 15:04:50.066948 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.066954 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.066960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.066966 | orchestrator | 2025-08-29 15:04:50.066972 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.066978 | orchestrator | Friday 29 August 2025 15:03:01 +0000 (0:00:00.335) 0:00:08.609 ********* 2025-08-29 15:04:50.066984 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.066990 | orchestrator | ok: [testbed-node-1] 2025-08-29 
15:04:50.066996 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.067002 | orchestrator | 2025-08-29 15:04:50.067008 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.067013 | orchestrator | Friday 29 August 2025 15:03:01 +0000 (0:00:00.328) 0:00:08.938 ********* 2025-08-29 15:04:50.067019 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067025 | orchestrator | 2025-08-29 15:04:50.067031 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.067036 | orchestrator | Friday 29 August 2025 15:03:02 +0000 (0:00:00.161) 0:00:09.100 ********* 2025-08-29 15:04:50.067115 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067122 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067129 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067135 | orchestrator | 2025-08-29 15:04:50.067141 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.067147 | orchestrator | Friday 29 August 2025 15:03:02 +0000 (0:00:00.552) 0:00:09.653 ********* 2025-08-29 15:04:50.067153 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.067169 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.067175 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.067181 | orchestrator | 2025-08-29 15:04:50.067187 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.067193 | orchestrator | Friday 29 August 2025 15:03:02 +0000 (0:00:00.312) 0:00:09.965 ********* 2025-08-29 15:04:50.067199 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067205 | orchestrator | 2025-08-29 15:04:50.067211 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.067217 | orchestrator | Friday 29 August 2025 15:03:03 +0000 
(0:00:00.128) 0:00:10.093 ********* 2025-08-29 15:04:50.067224 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067230 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067236 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067243 | orchestrator | 2025-08-29 15:04:50.067249 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.067255 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.368) 0:00:10.462 ********* 2025-08-29 15:04:50.067270 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.067276 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.067282 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.067288 | orchestrator | 2025-08-29 15:04:50.067294 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.067300 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.310) 0:00:10.772 ********* 2025-08-29 15:04:50.067306 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067312 | orchestrator | 2025-08-29 15:04:50.067319 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.067324 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.188) 0:00:10.961 ********* 2025-08-29 15:04:50.067330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067335 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067341 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067347 | orchestrator | 2025-08-29 15:04:50.067353 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.067358 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.513) 0:00:11.475 ********* 2025-08-29 15:04:50.067364 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.067371 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 15:04:50.067378 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.067384 | orchestrator | 2025-08-29 15:04:50.067391 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.067397 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.344) 0:00:11.819 ********* 2025-08-29 15:04:50.067404 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067410 | orchestrator | 2025-08-29 15:04:50.067417 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.067424 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.133) 0:00:11.952 ********* 2025-08-29 15:04:50.067430 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067437 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067451 | orchestrator | 2025-08-29 15:04:50.067457 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:04:50.067463 | orchestrator | Friday 29 August 2025 15:03:05 +0000 (0:00:00.296) 0:00:12.249 ********* 2025-08-29 15:04:50.067470 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:50.067477 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:50.067483 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:50.067490 | orchestrator | 2025-08-29 15:04:50.067497 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:04:50.067504 | orchestrator | Friday 29 August 2025 15:03:05 +0000 (0:00:00.527) 0:00:12.776 ********* 2025-08-29 15:04:50.067511 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067517 | orchestrator | 2025-08-29 15:04:50.067524 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:04:50.067531 | orchestrator | Friday 29 
August 2025 15:03:05 +0000 (0:00:00.179) 0:00:12.956 ********* 2025-08-29 15:04:50.067538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067594 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067601 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067608 | orchestrator | 2025-08-29 15:04:50.067615 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-08-29 15:04:50.067622 | orchestrator | Friday 29 August 2025 15:03:06 +0000 (0:00:00.339) 0:00:13.295 ********* 2025-08-29 15:04:50.067628 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:50.067635 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:50.067642 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:50.067649 | orchestrator | 2025-08-29 15:04:50.067655 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-08-29 15:04:50.067662 | orchestrator | Friday 29 August 2025 15:03:07 +0000 (0:00:01.770) 0:00:15.065 ********* 2025-08-29 15:04:50.067678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:04:50.067686 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:04:50.067693 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 15:04:50.067700 | orchestrator | 2025-08-29 15:04:50.067706 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-08-29 15:04:50.067713 | orchestrator | Friday 29 August 2025 15:03:10 +0000 (0:00:02.207) 0:00:17.272 ********* 2025-08-29 15:04:50.067720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 15:04:50.067733 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 15:04:50.067739 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 15:04:50.067747 | orchestrator | 2025-08-29 15:04:50.067754 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-08-29 15:04:50.067768 | orchestrator | Friday 29 August 2025 15:03:12 +0000 (0:00:02.590) 0:00:19.863 ********* 2025-08-29 15:04:50.067776 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:04:50.067784 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:04:50.067791 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 15:04:50.067799 | orchestrator | 2025-08-29 15:04:50.067807 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-08-29 15:04:50.067816 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:01.540) 0:00:21.403 ********* 2025-08-29 15:04:50.067823 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067831 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067839 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067847 | orchestrator | 2025-08-29 15:04:50.067855 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-08-29 15:04:50.067863 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:00.304) 0:00:21.708 ********* 2025-08-29 15:04:50.067870 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.067878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.067886 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.067893 | orchestrator | 2025-08-29 15:04:50.067901 | orchestrator 
| TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:04:50.067909 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:00.296) 0:00:22.005 ********* 2025-08-29 15:04:50.067917 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:50.067924 | orchestrator | 2025-08-29 15:04:50.067931 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 15:04:50.067937 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.787) 0:00:22.793 ********* 2025-08-29 15:04:50.067946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.067975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.067986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.067999 | orchestrator | 2025-08-29 15:04:50.068011 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 15:04:50.068020 | orchestrator | Friday 29 August 2025 15:03:17 +0000 (0:00:01.460) 0:00:24.253 ********* 2025-08-29 15:04:50 | INFO  | Task f09e7685-e4d3-4e59-a579-dc37e9622f1f is in state SUCCESS 2025-08-29 15:04:50.068034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY':
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068071 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.068092 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068101 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.068110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068125 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.068131 | orchestrator | 2025-08-29 15:04:50.068139 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 15:04:50.068145 | orchestrator | Friday 29 August 2025 15:03:17 +0000 (0:00:00.661) 0:00:24.914 ********* 2025-08-29 15:04:50.068161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068169 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:50.068176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068188 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:50.068204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:04:50.068212 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:50.068219 | orchestrator | 2025-08-29 15:04:50.068226 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 15:04:50.068233 | orchestrator | Friday 29 August 2025 15:03:19 +0000 (0:00:01.230) 0:00:26.145 ********* 2025-08-29 15:04:50.068240 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.068266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:04:50.068276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 15:04:50.068283 | orchestrator |
2025-08-29 15:04:50.068290 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:04:50.068299 | orchestrator | Friday 29 August 2025 15:03:20 +0000 (0:00:01.524) 0:00:27.670 *********
2025-08-29 15:04:50.068306 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:50.068313 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:50.068320 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:50.068326 | orchestrator |
2025-08-29 15:04:50.068333 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:04:50.068344 | orchestrator | Friday 29 August 2025 15:03:20 +0000 (0:00:00.311) 0:00:27.981 *********
2025-08-29 15:04:50.068351 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:04:50.068357 | orchestrator |
2025-08-29 15:04:50.068363 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-08-29 15:04:50.068369 | orchestrator | Friday 29 August 2025 15:03:21 +0000 (0:00:00.784) 0:00:28.766 *********
2025-08-29 15:04:50.068375 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:50.068381 | orchestrator |
2025-08-29 15:04:50.068386 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-08-29 15:04:50.068392 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:02.263) 0:00:31.029 *********
2025-08-29 15:04:50.068398 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:50.068403 | orchestrator |
2025-08-29 15:04:50.068409 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-08-29 15:04:50.068415 | orchestrator | Friday 29 August 2025 15:03:26 +0000 (0:00:02.310) 0:00:33.340 *********
2025-08-29 15:04:50.068421 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:50.068431 | orchestrator |
2025-08-29 15:04:50.068438 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:04:50.068443 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:15.982) 0:00:49.322 *********
2025-08-29 15:04:50.068449 | orchestrator |
2025-08-29 15:04:50.068454 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:04:50.068460 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:00.070) 0:00:49.393 *********
2025-08-29 15:04:50.068466 | orchestrator |
2025-08-29 15:04:50.068473 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:04:50.068479 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:00.071) 0:00:49.464 *********
2025-08-29 15:04:50.068484 | orchestrator |
2025-08-29 15:04:50.068491 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-08-29 15:04:50.068497 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:00.073) 0:00:49.537 *********
2025-08-29 15:04:50.068503 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:50.068509 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:50.068514 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:50.068520 | orchestrator |
2025-08-29 15:04:50.068527 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:04:50.068533 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-08-29 15:04:50.068558 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:04:50.068565 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:04:50.068572 | orchestrator |
2025-08-29 15:04:50.068578 | orchestrator |
2025-08-29 15:04:50.068584 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:04:50.068590 | orchestrator | Friday 29 August 2025 15:04:46 +0000 (0:01:04.544) 0:01:54.082 *********
2025-08-29 15:04:50.068596 | orchestrator | ===============================================================================
2025-08-29 15:04:50.068603 | orchestrator | horizon : Restart horizon container ------------------------------------ 64.54s
2025-08-29 15:04:50.068609 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.98s
2025-08-29 15:04:50.068615 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.59s
2025-08-29 15:04:50.068621 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.31s
2025-08-29 15:04:50.068627 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s
2025-08-29 15:04:50.068633 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.21s
2025-08-29 15:04:50.068639 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.77s
2025-08-29 15:04:50.068645 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s
2025-08-29 15:04:50.068652 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.52s
2025-08-29 15:04:50.068658 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.46s
2025-08-29 15:04:50.068664 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.23s
2025-08-29 15:04:50.068670 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s
2025-08-29 15:04:50.068676 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2025-08-29 15:04:50.068682 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-08-29 15:04:50.068688 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s
2025-08-29 15:04:50.068695 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.67s
2025-08-29 15:04:50.068706 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s
2025-08-29 15:04:50.068712 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2025-08-29 15:04:50.068723 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2025-08-29 15:04:50.068729 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2025-08-29 15:04:50.068735 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:04:50.068747 | orchestrator | 2025-08-29 15:04:50 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:04:50.068753 | orchestrator | 2025-08-29 15:04:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:53.105375 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:04:53.109309 | orchestrator | 2025-08-29 15:04:53 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:04:53.109376 | orchestrator | 2025-08-29 15:04:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:56.149875 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:04:56.149968 | orchestrator | 2025-08-29 15:04:56 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:04:56.149980 | orchestrator | 2025-08-29 15:04:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:59.192500 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:04:59.193396 | orchestrator | 2025-08-29 15:04:59 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:04:59.193433 | orchestrator | 2025-08-29 15:04:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:02.238704 | orchestrator | 2025-08-29 15:05:02 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:05:02.240049 | orchestrator | 2025-08-29 15:05:02 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:02.240083 | orchestrator | 2025-08-29 15:05:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:05.286813 | orchestrator | 2025-08-29 15:05:05 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:05:05.289039 | orchestrator | 2025-08-29 15:05:05 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:05.289414 | orchestrator | 2025-08-29 15:05:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:08.339870 | orchestrator | 2025-08-29 15:05:08 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:05:08.341605 | orchestrator | 2025-08-29 15:05:08 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:08.341673 | orchestrator | 2025-08-29 15:05:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:11.387126 | orchestrator | 2025-08-29 15:05:11 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:05:11.388072 | orchestrator | 2025-08-29 15:05:11 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:11.388128 | orchestrator | 2025-08-29 15:05:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:14.437396 | orchestrator | 2025-08-29 15:05:14 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state STARTED
2025-08-29 15:05:14.439454 | orchestrator | 2025-08-29 15:05:14 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:14.439791 | orchestrator | 2025-08-29 15:05:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:17.490768 | orchestrator | 2025-08-29 15:05:17 | INFO  | Task d6e534b6-5e16-4d39-89d7-d37ce4162408 is in state SUCCESS
2025-08-29 15:05:17.492348 | orchestrator | 2025-08-29 15:05:17 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:17.494329 | orchestrator | 2025-08-29 15:05:17 | INFO  | Task a47b2add-4c8e-43db-a4b6-4c069791d9b8 is in state STARTED
2025-08-29 15:05:17.496901 | orchestrator | 2025-08-29 15:05:17 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:17.499339 | orchestrator | 2025-08-29 15:05:17 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:17.499420 | orchestrator | 2025-08-29 15:05:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:20.554995 | orchestrator | 2025-08-29 15:05:20 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:20.558985 | orchestrator | 2025-08-29 15:05:20 | INFO  | Task a47b2add-4c8e-43db-a4b6-4c069791d9b8 is in state STARTED
2025-08-29 15:05:20.559053 | orchestrator | 2025-08-29 15:05:20 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:20.559064 | orchestrator | 2025-08-29 15:05:20 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:20.559074 | orchestrator | 2025-08-29 15:05:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:23.610272 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:23.610456 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:23.612451 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task a47b2add-4c8e-43db-a4b6-4c069791d9b8 is in state SUCCESS
2025-08-29 15:05:23.613244 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:23.614764 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:23.616155 | orchestrator | 2025-08-29 15:05:23 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:23.616197 | orchestrator | 2025-08-29 15:05:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:26.667257 | orchestrator | 2025-08-29 15:05:26 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:26.667354 | orchestrator | 2025-08-29 15:05:26 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:26.667366 | orchestrator | 2025-08-29 15:05:26 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:26.669115 | orchestrator | 2025-08-29 15:05:26 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:26.669620 | orchestrator | 2025-08-29 15:05:26 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:26.669664 | orchestrator | 2025-08-29 15:05:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:29.709979 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:29.710248 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:29.710989 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:29.711614 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:29.712347 | orchestrator | 2025-08-29 15:05:29 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:29.712383 | orchestrator | 2025-08-29 15:05:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:32.824657 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:32.826804 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:32.828641 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:32.830106 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:32.831731 | orchestrator | 2025-08-29 15:05:32 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:32.831752 | orchestrator | 2025-08-29 15:05:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:35.878180 | orchestrator | 2025-08-29 15:05:35 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:35.878766 | orchestrator | 2025-08-29 15:05:35 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:35.879858 | orchestrator | 2025-08-29 15:05:35 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:35.880730 | orchestrator | 2025-08-29 15:05:35 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:35.882285 | orchestrator | 2025-08-29 15:05:35 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:35.882356 | orchestrator | 2025-08-29 15:05:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:38.963363 | orchestrator | 2025-08-29 15:05:38 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:38.963984 | orchestrator | 2025-08-29 15:05:38 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:38.967803 | orchestrator | 2025-08-29 15:05:38 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:38.968582 | orchestrator | 2025-08-29 15:05:38 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:38.969062 | orchestrator | 2025-08-29 15:05:38 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:38.969088 | orchestrator | 2025-08-29 15:05:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:42.011653 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:42.012384 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:42.013415 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:42.014114 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:42.016108 | orchestrator | 2025-08-29 15:05:42 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:42.016152 | orchestrator | 2025-08-29 15:05:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:45.061978 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:45.062096 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:45.064702 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:45.066562 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state STARTED
2025-08-29 15:05:45.067834 | orchestrator | 2025-08-29 15:05:45 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:45.067871 | orchestrator | 2025-08-29 15:05:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:48.178476 | orchestrator |
2025-08-29 15:05:48.178610 | orchestrator |
2025-08-29 15:05:48.178624 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-08-29 15:05:48.178636 | orchestrator |
2025-08-29 15:05:48.178647 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-08-29 15:05:48.178659 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.211) 0:00:00.211 *********
2025-08-29 15:05:48.178671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-08-29 15:05:48.178684 | orchestrator |
2025-08-29 15:05:48.178695 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-08-29 15:05:48.178706 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.206) 0:00:00.417 *********
2025-08-29 15:05:48.178717 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-08-29 15:05:48.178800 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-08-29 15:05:48.178817 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-08-29 15:05:48.178907 | orchestrator |
2025-08-29 15:05:48.178920 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-08-29 15:05:48.178931 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:01.180) 0:00:01.598 *********
2025-08-29 15:05:48.178943 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-08-29 15:05:48.179016 | orchestrator |
2025-08-29 15:05:48.179702 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-08-29 15:05:48.179752 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:01.187) 0:00:02.785 *********
2025-08-29 15:05:48.179766 | orchestrator | changed: [testbed-manager]
2025-08-29 15:05:48.179777 | orchestrator |
2025-08-29 15:05:48.179788 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-08-29 15:05:48.179799 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:01.071) 0:00:03.856 *********
2025-08-29 15:05:48.179810 | orchestrator | changed: [testbed-manager]
2025-08-29 15:05:48.179821 | orchestrator |
2025-08-29 15:05:48.179832 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-08-29 15:05:48.179843 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:00.924) 0:00:04.781 *********
2025-08-29 15:05:48.179854 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-08-29 15:05:48.179866 | orchestrator | ok: [testbed-manager] 2025-08-29 15:05:48.179877 | orchestrator | 2025-08-29 15:05:48.179888 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 15:05:48.179899 | orchestrator | Friday 29 August 2025 15:05:05 +0000 (0:00:41.467) 0:00:46.248 ********* 2025-08-29 15:05:48.179910 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 15:05:48.179921 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 15:05:48.179933 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 15:05:48.179944 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:05:48.179974 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 15:05:48.179985 | orchestrator | 2025-08-29 15:05:48.179996 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 15:05:48.180008 | orchestrator | Friday 29 August 2025 15:05:09 +0000 (0:00:04.090) 0:00:50.339 ********* 2025-08-29 15:05:48.180041 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 15:05:48.180052 | orchestrator | 2025-08-29 15:05:48.180063 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 15:05:48.180074 | orchestrator | Friday 29 August 2025 15:05:10 +0000 (0:00:00.507) 0:00:50.847 ********* 2025-08-29 15:05:48.180085 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:48.180096 | orchestrator | 2025-08-29 15:05:48.180107 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 15:05:48.180118 | orchestrator | Friday 29 August 2025 15:05:10 +0000 (0:00:00.151) 0:00:50.998 ********* 2025-08-29 15:05:48.180129 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:05:48.180140 | orchestrator | 2025-08-29 15:05:48.180151 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 15:05:48.180162 | orchestrator | Friday 29 August 2025 15:05:10 +0000 (0:00:00.330) 0:00:51.329 ********* 2025-08-29 15:05:48.180173 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:48.180183 | orchestrator | 2025-08-29 15:05:48.180195 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 15:05:48.180207 | orchestrator | Friday 29 August 2025 15:05:12 +0000 (0:00:01.805) 0:00:53.134 ********* 2025-08-29 15:05:48.180220 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:48.180233 | orchestrator | 2025-08-29 15:05:48.180246 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 15:05:48.180259 | orchestrator | Friday 29 August 2025 15:05:13 +0000 (0:00:00.815) 0:00:53.950 ********* 2025-08-29 15:05:48.180271 | orchestrator | changed: [testbed-manager] 2025-08-29 15:05:48.180284 | orchestrator | 2025-08-29 15:05:48.180296 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 15:05:48.180309 | orchestrator | Friday 29 August 2025 15:05:13 +0000 (0:00:00.662) 0:00:54.612 ********* 2025-08-29 15:05:48.180323 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 15:05:48.180335 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 15:05:48.180348 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:05:48.180361 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-08-29 15:05:48.180373 | orchestrator | 2025-08-29 15:05:48.180386 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:48.180399 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:05:48.180413 | orchestrator | 2025-08-29 15:05:48.180426 | orchestrator | 2025-08-29 
15:05:48.180530 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:48.180546 | orchestrator | Friday 29 August 2025 15:05:15 +0000 (0:00:01.526) 0:00:56.139 ********* 2025-08-29 15:05:48.180559 | orchestrator | =============================================================================== 2025-08-29 15:05:48.180570 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.47s 2025-08-29 15:05:48.180580 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.09s 2025-08-29 15:05:48.180591 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.81s 2025-08-29 15:05:48.180602 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-08-29 15:05:48.180613 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.19s 2025-08-29 15:05:48.180623 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s 2025-08-29 15:05:48.180634 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.07s 2025-08-29 15:05:48.180645 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s 2025-08-29 15:05:48.180655 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.82s 2025-08-29 15:05:48.180668 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s 2025-08-29 15:05:48.180698 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2025-08-29 15:05:48.180729 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-08-29 15:05:48.180747 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-08-29 15:05:48.180765 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-08-29 15:05:48.180784 | orchestrator | 2025-08-29 15:05:48.180803 | orchestrator | 2025-08-29 15:05:48.180821 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:05:48.180839 | orchestrator | 2025-08-29 15:05:48.180855 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:05:48.180866 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:00.190) 0:00:00.190 ********* 2025-08-29 15:05:48.180876 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:05:48.180887 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:05:48.180898 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:05:48.180909 | orchestrator | 2025-08-29 15:05:48.180920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:05:48.180930 | orchestrator | Friday 29 August 2025 15:05:20 +0000 (0:00:00.341) 0:00:00.532 ********* 2025-08-29 15:05:48.180941 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:05:48.180952 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:05:48.180962 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:05:48.180973 | orchestrator | 2025-08-29 15:05:48.180984 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 15:05:48.180994 | orchestrator | 2025-08-29 15:05:48.181006 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 15:05:48.181035 | orchestrator | Friday 29 August 2025 15:05:21 +0000 (0:00:00.784) 0:00:01.317 ********* 2025-08-29 15:05:48.181053 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:05:48.181073 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:05:48.181092 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 15:05:48.181110 | orchestrator | 2025-08-29 15:05:48.181128 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:48.181148 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:48.181168 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:48.181188 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:05:48.181199 | orchestrator | 2025-08-29 15:05:48.181210 | orchestrator | 2025-08-29 15:05:48.181221 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:48.181232 | orchestrator | Friday 29 August 2025 15:05:21 +0000 (0:00:00.859) 0:00:02.176 ********* 2025-08-29 15:05:48.181242 | orchestrator | =============================================================================== 2025-08-29 15:05:48.181253 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.86s 2025-08-29 15:05:48.181264 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2025-08-29 15:05:48.181274 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-08-29 15:05:48.181285 | orchestrator | 2025-08-29 15:05:48.181296 | orchestrator | 2025-08-29 15:05:48.181306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:05:48.181317 | orchestrator | 2025-08-29 15:05:48.181328 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:05:48.181339 | orchestrator | Friday 29 August 2025 15:02:53 +0000 (0:00:00.326) 0:00:00.326 ********* 2025-08-29 15:05:48.181350 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:05:48.181360 | 
orchestrator | ok: [testbed-node-1] 2025-08-29 15:05:48.181382 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:05:48.181393 | orchestrator | 2025-08-29 15:05:48.181403 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:05:48.181414 | orchestrator | Friday 29 August 2025 15:02:53 +0000 (0:00:00.303) 0:00:00.630 ********* 2025-08-29 15:05:48.181425 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:05:48.181436 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:05:48.181447 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:05:48.181458 | orchestrator | 2025-08-29 15:05:48.181469 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 15:05:48.181480 | orchestrator | 2025-08-29 15:05:48.181598 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:05:48.181612 | orchestrator | Friday 29 August 2025 15:02:54 +0000 (0:00:00.413) 0:00:01.044 ********* 2025-08-29 15:05:48.181623 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:05:48.181634 | orchestrator | 2025-08-29 15:05:48.181645 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 15:05:48.181656 | orchestrator | Friday 29 August 2025 15:02:54 +0000 (0:00:00.541) 0:00:01.585 ********* 2025-08-29 15:05:48.181674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.181700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.181714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.181768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.181905 | orchestrator | 2025-08-29 15:05:48.181923 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 15:05:48.181940 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:01.909) 0:00:03.494 ********* 2025-08-29 15:05:48.181957 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 15:05:48.181974 | orchestrator | 2025-08-29 15:05:48.181989 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 15:05:48.182007 | orchestrator | Friday 29 August 2025 15:02:57 +0000 (0:00:00.986) 0:00:04.481 ********* 2025-08-29 15:05:48.182112 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:05:48.182130 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:05:48.182143 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:05:48.182153 | orchestrator | 2025-08-29 15:05:48.182163 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 15:05:48.182173 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.596) 0:00:05.078 ********* 2025-08-29 
15:05:48.182183 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:05:48.182193 | orchestrator | 2025-08-29 15:05:48.182203 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:05:48.182252 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.782) 0:00:05.861 ********* 2025-08-29 15:05:48.182267 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:05:48.182284 | orchestrator | 2025-08-29 15:05:48.182299 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 15:05:48.182315 | orchestrator | Friday 29 August 2025 15:02:59 +0000 (0:00:00.560) 0:00:06.421 ********* 2025-08-29 15:05:48.182335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.182364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.182388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-08-29 15:05:48.182406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182468 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.182576 | 
orchestrator | 2025-08-29 15:05:48.182592 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 15:05:48.182606 | orchestrator | Friday 29 August 2025 15:03:02 +0000 (0:00:03.467) 0:00:09.889 ********* 2025-08-29 15:05:48.182629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 
15:05:48.182651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48.182661 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.182676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.182705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48.182715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.182735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.182756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48.182773 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.182783 | orchestrator | 2025-08-29 15:05:48.182793 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 15:05:48.182808 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.579) 0:00:10.468 ********* 2025-08-29 15:05:48.182819 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.182849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:05:48.182861 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:05:48.182871 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:05:48.182881 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED 2025-08-29 15:05:48.182890 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 4ba6f4c8-08c5-4867-9eec-b551f7f26f19 is in state SUCCESS 2025-08-29 15:05:48.182902 | orchestrator | 2025-08-29 15:05:48.182912 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.182922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.182954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48.182964 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.182983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:05:48.182995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:05:48.183021 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.183031 | orchestrator | 2025-08-29 15:05:48.183041 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 15:05:48.183051 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.776) 0:00:11.244 ********* 2025-08-29 15:05:48.183066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183186 | orchestrator | 2025-08-29 15:05:48.183196 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 15:05:48.183206 | orchestrator | Friday 29 August 2025 15:03:07 +0000 (0:00:03.496) 0:00:14.741 ********* 2025-08-29 15:05:48.183216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183410 | orchestrator | 2025-08-29 15:05:48.183424 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 15:05:48.183440 | orchestrator | Friday 29 August 2025 15:03:13 +0000 (0:00:05.532) 0:00:20.273 ********* 2025-08-29 15:05:48.183455 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:05:48.183470 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:05:48.183512 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:05:48.183529 | orchestrator | 2025-08-29 15:05:48.183545 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 15:05:48.183560 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:01.535) 0:00:21.808 ********* 2025-08-29 15:05:48.183577 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.183593 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.183609 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.183623 | orchestrator | 2025-08-29 15:05:48.183632 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 15:05:48.183642 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.545) 0:00:22.354 ********* 2025-08-29 15:05:48.183664 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.183682 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.183692 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.183702 | orchestrator | 2025-08-29 15:05:48.183712 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 15:05:48.183721 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.304) 0:00:22.658 ********* 2025-08-29 15:05:48.183731 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.183740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.183750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.183759 | orchestrator | 2025-08-29 15:05:48.183769 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-08-29 15:05:48.183778 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.555) 0:00:23.213 ********* 2025-08-29 15:05:48.183789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.183862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:05:48.183872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.183907 | orchestrator | 2025-08-29 15:05:48.183917 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:05:48.183934 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:02.487) 0:00:25.701 ********* 2025-08-29 15:05:48.183944 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.183954 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.183964 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.183973 | orchestrator | 2025-08-29 15:05:48.183983 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-08-29 15:05:48.183993 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.291) 0:00:25.993 ********* 2025-08-29 15:05:48.184002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:05:48.184013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:05:48.184022 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:05:48.184032 | orchestrator | 2025-08-29 15:05:48.184042 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 15:05:48.184057 | orchestrator | Friday 29 August 2025 15:03:21 +0000 (0:00:02.098) 0:00:28.091 ********* 2025-08-29 
15:05:48.184068 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:05:48.184077 | orchestrator |
2025-08-29 15:05:48.184087 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-08-29 15:05:48.184096 | orchestrator | Friday 29 August 2025 15:03:22 +0000 (0:00:01.416) 0:00:29.507 *********
2025-08-29 15:05:48.184106 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.184115 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:05:48.184125 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:05:48.184135 | orchestrator |
2025-08-29 15:05:48.184144 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-08-29 15:05:48.184154 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:00.587) 0:00:30.095 *********
2025-08-29 15:05:48.184163 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 15:05:48.184173 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:05:48.184183 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 15:05:48.184192 | orchestrator |
2025-08-29 15:05:48.184202 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-08-29 15:05:48.184211 | orchestrator | Friday 29 August 2025 15:03:24 +0000 (0:00:00.999) 0:00:31.094 *********
2025-08-29 15:05:48.184221 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:05:48.184231 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:05:48.184240 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:05:48.184250 | orchestrator |
2025-08-29 15:05:48.184259 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-08-29 15:05:48.184269 | orchestrator | Friday 29 August 2025 15:03:24 +0000 (0:00:00.310) 0:00:31.405 *********
2025-08-29 15:05:48.184278 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:05:48.184302 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:05:48.184322 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 15:05:48.184332 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:05:48.184349 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:05:48.184365 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 15:05:48.184382 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:05:48.184399 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:05:48.184416 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 15:05:48.184442 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:05:48.184457 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:05:48.184469 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 15:05:48.184555 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:05:48.184575 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:05:48.184592 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 15:05:48.184609 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:05:48.184626 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:05:48.184636 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 15:05:48.184646 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:05:48.184655 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:05:48.184665 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 15:05:48.184674 | orchestrator |
2025-08-29 15:05:48.184684 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-08-29 15:05:48.184693 | orchestrator | Friday 29 August 2025 15:03:33 +0000 (0:00:08.763) 0:00:40.169 *********
2025-08-29 15:05:48.184702 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:05:48.184712 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:05:48.184721 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 15:05:48.184731 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:05:48.184740 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:05:48.184772 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 15:05:48.184783 | orchestrator |
2025-08-29 15:05:48.184792 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-08-29 15:05:48.184809 | orchestrator | Friday 29 August 2025 15:03:35 +0000 (0:00:02.631) 0:00:42.801
********* 2025-08-29 15:05:48.184821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.184833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.184859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:05:48.184870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:05:48.184948 | orchestrator | 2025-08-29 15:05:48.184958 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:05:48.184968 | orchestrator | Friday 29 August 2025 15:03:38 +0000 (0:00:02.235) 0:00:45.036 ********* 2025-08-29 15:05:48.184978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:05:48.184987 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:05:48.184997 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:05:48.185007 | orchestrator | 2025-08-29 15:05:48.185016 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-08-29 15:05:48.185026 | orchestrator | Friday 29 August 2025 15:03:38 +0000 (0:00:00.314) 0:00:45.350 ********* 2025-08-29 
15:05:48.185039 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185055 | orchestrator |
2025-08-29 15:05:48.185071 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-08-29 15:05:48.185083 | orchestrator | Friday 29 August 2025 15:03:40 +0000 (0:00:02.206) 0:00:47.556 *********
2025-08-29 15:05:48.185097 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185111 | orchestrator |
2025-08-29 15:05:48.185125 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-08-29 15:05:48.185138 | orchestrator | Friday 29 August 2025 15:03:42 +0000 (0:00:02.189) 0:00:49.746 *********
2025-08-29 15:05:48.185147 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:05:48.185155 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:05:48.185163 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:05:48.185170 | orchestrator |
2025-08-29 15:05:48.185178 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-08-29 15:05:48.185186 | orchestrator | Friday 29 August 2025 15:03:44 +0000 (0:00:01.481) 0:00:51.227 *********
2025-08-29 15:05:48.185194 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:05:48.185202 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:05:48.185210 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:05:48.185218 | orchestrator |
2025-08-29 15:05:48.185226 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-08-29 15:05:48.185239 | orchestrator | Friday 29 August 2025 15:03:44 +0000 (0:00:00.308) 0:00:51.536 *********
2025-08-29 15:05:48.185247 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.185255 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:05:48.185262 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:05:48.185281 | orchestrator |
2025-08-29 15:05:48.185289 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-08-29 15:05:48.185297 | orchestrator | Friday 29 August 2025 15:03:44 +0000 (0:00:00.290) 0:00:51.826 *********
2025-08-29 15:05:48.185305 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185313 | orchestrator |
2025-08-29 15:05:48.185320 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-08-29 15:05:48.185328 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:14.138) 0:01:05.965 *********
2025-08-29 15:05:48.185336 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185344 | orchestrator |
2025-08-29 15:05:48.185351 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:05:48.185359 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:09.844) 0:01:15.809 *********
2025-08-29 15:05:48.185367 | orchestrator |
2025-08-29 15:05:48.185374 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:05:48.185382 | orchestrator | Friday 29 August 2025 15:04:08 +0000 (0:00:00.063) 0:01:15.873 *********
2025-08-29 15:05:48.185390 | orchestrator |
2025-08-29 15:05:48.185398 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:05:48.185406 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:00.271) 0:01:16.145 *********
2025-08-29 15:05:48.185419 | orchestrator |
2025-08-29 15:05:48.185443 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-08-29 15:05:48.185457 | orchestrator | Friday 29 August 2025 15:04:09 +0000 (0:00:00.070) 0:01:16.216 *********
2025-08-29 15:05:48.185469 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185478 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:05:48.185516 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:05:48.185524 | orchestrator |
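The fernet bootstrap task and the keystone_fernet container above manage Keystone's fernet key repository (the keystone_fernet_tokens volume, mounted at /etc/keystone/fernet-keys). As background, here is a minimal pure-Python sketch of Keystone's documented rotation scheme, where key 0 is the staged key, the highest index is the primary key, and the [fernet_tokens] max_active_keys option bounds the set; this is an illustration, not the kolla-ansible fernet-rotate.sh script that the playbook actually installs:

```python
# Sketch of Keystone's fernet key rotation, modelling the key repository
# (normally the /etc/keystone/fernet-keys directory) as an index -> key dict.
# Key 0 is the staged key, the highest index is the primary signing key,
# and everything in between is a secondary key kept only for validation.

def rotate_fernet_keys(keys: dict[int, str], new_staged: str,
                       max_active_keys: int = 3) -> dict[int, str]:
    """Promote the staged key (index 0) to primary and stage a new key."""
    rotated = dict(keys)
    primary = max(rotated)                 # current primary has the highest index
    rotated[primary + 1] = rotated.pop(0)  # staged key becomes the new primary
    rotated[0] = new_staged                # freshly generated key is staged
    # Discard the oldest secondary keys until we are within max_active_keys.
    while len(rotated) > max_active_keys:
        oldest = min(k for k in rotated if k != 0)
        del rotated[oldest]
    return rotated

keys = {0: "staged-a", 1: "primary-a"}
keys = rotate_fernet_keys(keys, "staged-b")
# -> {0: "staged-b", 1: "primary-a", 2: "staged-a"}
```

Because a staged key can validate tokens before it ever signs one, rotating on one node and then rsync'ing the repository to the others (the fernet-node-sync.sh/fernet-push.sh pattern seen above) never invalidates tokens mid-distribution.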
2025-08-29 15:05:48.185532 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-08-29 15:05:48.185540 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:27.994) 0:01:44.211 *********
2025-08-29 15:05:48.185548 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185555 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:05:48.185563 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:05:48.185571 | orchestrator |
2025-08-29 15:05:48.185579 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-08-29 15:05:48.185587 | orchestrator | Friday 29 August 2025 15:04:47 +0000 (0:00:10.281) 0:01:54.492 *********
2025-08-29 15:05:48.185594 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:05:48.185602 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:05:48.185610 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185618 | orchestrator |
2025-08-29 15:05:48.185626 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:05:48.185634 | orchestrator | Friday 29 August 2025 15:04:55 +0000 (0:00:07.653) 0:02:02.146 *********
2025-08-29 15:05:48.185641 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:05:48.185649 | orchestrator |
2025-08-29 15:05:48.185662 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-08-29 15:05:48.185670 | orchestrator | Friday 29 August 2025 15:04:55 +0000 (0:00:00.814) 0:02:02.961 *********
2025-08-29 15:05:48.185678 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:05:48.185685 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:05:48.185693 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:05:48.185701 | orchestrator |
2025-08-29 15:05:48.185709 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-08-29 15:05:48.185717 | orchestrator | Friday 29 August 2025 15:04:56 +0000 (0:00:00.874) 0:02:03.835 *********
2025-08-29 15:05:48.185724 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:05:48.185732 | orchestrator |
2025-08-29 15:05:48.185740 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-08-29 15:05:48.185748 | orchestrator | Friday 29 August 2025 15:04:58 +0000 (0:00:01.784) 0:02:05.620 *********
2025-08-29 15:05:48.185762 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-08-29 15:05:48.185770 | orchestrator |
2025-08-29 15:05:48.185778 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-08-29 15:05:48.185786 | orchestrator | Friday 29 August 2025 15:05:09 +0000 (0:00:10.879) 0:02:16.500 *********
2025-08-29 15:05:48.185793 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-08-29 15:05:48.185801 | orchestrator |
2025-08-29 15:05:48.185809 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-08-29 15:05:48.185817 | orchestrator | Friday 29 August 2025 15:05:34 +0000 (0:00:24.776) 0:02:41.277 *********
2025-08-29 15:05:48.185824 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-08-29 15:05:48.185833 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-08-29 15:05:48.185840 | orchestrator |
2025-08-29 15:05:48.185848 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-08-29 15:05:48.185856 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:07.013) 0:02:48.290 *********
2025-08-29 15:05:48.185864 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.185872 | orchestrator |
2025-08-29 15:05:48.185879 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-08-29 15:05:48.185889 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:00.131) 0:02:48.422 *********
2025-08-29 15:05:48.185901 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.185914 | orchestrator |
2025-08-29 15:05:48.185932 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-08-29 15:05:48.185948 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:00.142) 0:02:48.753 *********
2025-08-29 15:05:48.185960 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.185971 | orchestrator |
2025-08-29 15:05:48.185992 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-08-29 15:05:48.186005 | orchestrator | Friday 29 August 2025 15:05:41 +0000 (0:00:00.368) 0:02:48.895 *********
2025-08-29 15:05:48.186056 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.186074 | orchestrator |
2025-08-29 15:05:48.186087 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-08-29 15:05:48.186098 | orchestrator | Friday 29 August 2025 15:05:42 +0000 (0:00:03.511) 0:02:49.264 *********
2025-08-29 15:05:48.186106 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:05:48.186114 | orchestrator |
2025-08-29 15:05:48.186122 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:05:48.186130 | orchestrator | Friday 29 August 2025 15:05:45 +0000 (0:00:03.511) 0:02:52.776 *********
2025-08-29 15:05:48.186137 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:05:48.186145 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:05:48.186153 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:05:48.186161 | orchestrator |
2025-08-29 15:05:48.186169 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:05:48.186178 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-08-29 15:05:48.186188 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 15:05:48.186196 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 15:05:48.186203 | orchestrator | 2025-08-29 15:05:48.186211 | orchestrator | 2025-08-29 15:05:48.186219 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:48.186227 | orchestrator | Friday 29 August 2025 15:05:46 +0000 (0:00:00.461) 0:02:53.237 ********* 2025-08-29 15:05:48.186235 | orchestrator | =============================================================================== 2025-08-29 15:05:48.186250 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.99s 2025-08-29 15:05:48.186258 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.78s 2025-08-29 15:05:48.186266 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.14s 2025-08-29 15:05:48.186274 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.88s 2025-08-29 15:05:48.186282 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.28s 2025-08-29 15:05:48.186289 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.84s 2025-08-29 15:05:48.186297 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.76s 2025-08-29 15:05:48.186305 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.65s 2025-08-29 15:05:48.186313 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.01s 2025-08-29 
15:05:48.186329 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.53s 2025-08-29 15:05:48.186337 | orchestrator | keystone : Creating default user role ----------------------------------- 3.51s 2025-08-29 15:05:48.186345 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.50s 2025-08-29 15:05:48.186353 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.47s 2025-08-29 15:05:48.186361 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.63s 2025-08-29 15:05:48.186368 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.49s 2025-08-29 15:05:48.186376 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s 2025-08-29 15:05:48.186384 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2025-08-29 15:05:48.186392 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.19s 2025-08-29 15:05:48.186399 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.10s 2025-08-29 15:05:48.186407 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2025-08-29 15:05:48.186415 | orchestrator | 2025-08-29 15:05:48 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED 2025-08-29 15:05:48.186423 | orchestrator | 2025-08-29 15:05:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:51.180546 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:05:51.180642 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:05:51.181249 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in 
2025-08-29 15:05:51.182154 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:51.182863 | orchestrator | 2025-08-29 15:05:51 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:51.182931 | orchestrator | 2025-08-29 15:05:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:54.210310 | orchestrator | 2025-08-29 15:05:54 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:05:54.211912 | orchestrator | 2025-08-29 15:05:54 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:05:54.212309 | orchestrator | 2025-08-29 15:05:54 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:05:54.212901 | orchestrator | 2025-08-29 15:05:54 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state STARTED
2025-08-29 15:05:54.213398 | orchestrator | 2025-08-29 15:05:54 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:05:54.213430 | orchestrator | 2025-08-29 15:05:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:06:06.344609 | orchestrator | 2025-08-29 15:06:06 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:06:06.347397 | orchestrator | 2025-08-29 15:06:06 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:06:06.347874 | orchestrator | 2025-08-29 15:06:06 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:06:06.349990 | orchestrator | 2025-08-29 15:06:06 | INFO  | Task 9104836d-57ed-4a51-9972-a19a717d1503 is in state SUCCESS
2025-08-29 15:06:06.351131 | orchestrator | 2025-08-29 15:06:06 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:06:06.351175 | orchestrator | 2025-08-29 15:06:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:06:09.382869 | orchestrator | 2025-08-29 15:06:09 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:06:09.383106 | orchestrator | 2025-08-29 15:06:09 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:06:09.384046 | orchestrator | 2025-08-29 15:06:09 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:06:09.387429 | orchestrator | 2025-08-29 15:06:09 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:06:09.388064 | orchestrator | 2025-08-29 15:06:09 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED
2025-08-29 15:06:09.388088 | orchestrator | 2025-08-29 15:06:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:06:51.896568 | orchestrator | 2025-08-29 15:06:51 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:06:51.896630 | orchestrator | 2025-08-29 15:06:51 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:06:51.896636 | orchestrator | 2025-08-29 15:06:51 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:06:51.896642 | orchestrator | 2025-08-29 15:06:51 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:06:51.896647 | orchestrator | 2025-08-29 15:06:51 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state STARTED 2025-08-29 15:06:51.896671 | orchestrator | 2025-08-29 15:06:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:54.924771 | orchestrator | 2025-08-29 15:06:54.924828 | orchestrator | 2025-08-29 15:06:54.924837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:06:54.924846 | orchestrator | 2025-08-29 15:06:54.924853 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:06:54.924919 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.323) 0:00:00.323 ********* 2025-08-29 15:06:54.924930 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:54.924938 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:54.924945 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:54.924953 | orchestrator | ok: [testbed-manager] 2025-08-29 15:06:54.924960 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:06:54.924967 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:06:54.924974 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:06:54.924981 | orchestrator | 2025-08-29 15:06:54.924988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:06:54.924995 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:01.024) 0:00:01.348 ********* 2025-08-29 15:06:54.925002 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925012 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925019 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925027 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925034 | orchestrator | ok: [testbed-node-3] => 
(item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925041 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925048 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 15:06:54.925055 | orchestrator | 2025-08-29 15:06:54.925062 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:06:54.925069 | orchestrator | 2025-08-29 15:06:54.925076 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 15:06:54.925084 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:00.843) 0:00:02.192 ********* 2025-08-29 15:06:54.925091 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:06:54.925099 | orchestrator | 2025-08-29 15:06:54.925106 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 15:06:54.925113 | orchestrator | Friday 29 August 2025 15:05:32 +0000 (0:00:02.433) 0:00:04.626 ********* 2025-08-29 15:06:54.925120 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-08-29 15:06:54.925127 | orchestrator | 2025-08-29 15:06:54.925134 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 15:06:54.925142 | orchestrator | Friday 29 August 2025 15:05:36 +0000 (0:00:04.312) 0:00:08.938 ********* 2025-08-29 15:06:54.925149 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 15:06:54.925157 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 15:06:54.925164 | orchestrator | 2025-08-29 15:06:54.925171 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 15:06:54.925178 | orchestrator | Friday 29 August 2025 15:05:44 +0000 (0:00:07.473) 0:00:16.411 ********* 2025-08-29 15:06:54.925185 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:06:54.925193 | orchestrator | 2025-08-29 15:06:54.925200 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 15:06:54.925207 | orchestrator | Friday 29 August 2025 15:05:47 +0000 (0:00:03.760) 0:00:20.171 ********* 2025-08-29 15:06:54.925225 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:06:54.925232 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-08-29 15:06:54.925242 | orchestrator | 2025-08-29 15:06:54.925254 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 15:06:54.925262 | orchestrator | Friday 29 August 2025 15:05:52 +0000 (0:00:04.444) 0:00:24.616 ********* 2025-08-29 15:06:54.925269 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:06:54.925276 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-08-29 15:06:54.925284 | orchestrator | 2025-08-29 15:06:54.925291 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 15:06:54.925298 | orchestrator | Friday 29 August 2025 15:05:59 +0000 (0:00:07.291) 0:00:31.908 ********* 2025-08-29 15:06:54.925305 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-08-29 15:06:54.925312 | orchestrator | 2025-08-29 15:06:54.925319 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:06:54.925326 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925333 | orchestrator | testbed-node-0 : ok=9  changed=5  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925340 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925347 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925355 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925375 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925383 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:54.925391 | orchestrator | 2025-08-29 15:06:54.925399 | orchestrator | 2025-08-29 15:06:54.925407 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:06:54.925414 | orchestrator | Friday 29 August 2025 15:06:05 +0000 (0:00:05.760) 0:00:37.668 ********* 2025-08-29 15:06:54.925440 | orchestrator | =============================================================================== 2025-08-29 15:06:54.925448 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.47s 2025-08-29 15:06:54.925456 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.29s 2025-08-29 15:06:54.925463 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.76s 2025-08-29 15:06:54.925474 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.44s 2025-08-29 15:06:54.925482 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.31s 2025-08-29 15:06:54.925488 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.76s 2025-08-29 15:06:54.925495 | orchestrator | ceph-rgw : include_tasks 
2025-08-29 15:06:54.925501 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s
2025-08-29 15:06:54.925508 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2025-08-29 15:06:54.925515 | orchestrator |
2025-08-29 15:06:54.925523 | orchestrator |
2025-08-29 15:06:54.925531 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-08-29 15:06:54.925538 | orchestrator |
2025-08-29 15:06:54.925546 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-08-29 15:06:54.925553 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:00.272) 0:00:00.273 *********
2025-08-29 15:06:54.925560 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925573 | orchestrator |
2025-08-29 15:06:54.925580 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-08-29 15:06:54.925587 | orchestrator | Friday 29 August 2025 15:05:22 +0000 (0:00:02.359) 0:00:02.633 *********
2025-08-29 15:06:54.925594 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925601 | orchestrator |
2025-08-29 15:06:54.925608 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-08-29 15:06:54.925615 | orchestrator | Friday 29 August 2025 15:05:23 +0000 (0:00:01.111) 0:00:03.744 *********
2025-08-29 15:06:54.925622 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925629 | orchestrator |
2025-08-29 15:06:54.925636 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-08-29 15:06:54.925643 | orchestrator | Friday 29 August 2025 15:05:24 +0000 (0:00:01.194) 0:00:04.939 *********
2025-08-29 15:06:54.925650 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925657 | orchestrator |
2025-08-29 15:06:54.925663 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-08-29 15:06:54.925670 | orchestrator | Friday 29 August 2025 15:05:26 +0000 (0:00:01.511) 0:00:06.451 *********
2025-08-29 15:06:54.925677 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925684 | orchestrator |
2025-08-29 15:06:54.925691 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-08-29 15:06:54.925698 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:01.144) 0:00:07.595 *********
2025-08-29 15:06:54.925705 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925712 | orchestrator |
2025-08-29 15:06:54.925719 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-08-29 15:06:54.925726 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:01.061) 0:00:08.657 *********
2025-08-29 15:06:54.925733 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925740 | orchestrator |
2025-08-29 15:06:54.925747 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-08-29 15:06:54.925753 | orchestrator | Friday 29 August 2025 15:05:30 +0000 (0:00:02.055) 0:00:10.712 *********
2025-08-29 15:06:54.925760 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925767 | orchestrator |
2025-08-29 15:06:54.925774 | orchestrator | TASK [Create admin user] *******************************************************
2025-08-29 15:06:54.925781 | orchestrator | Friday 29 August 2025 15:05:32 +0000 (0:00:01.663) 0:00:12.376 *********
2025-08-29 15:06:54.925788 | orchestrator | changed: [testbed-manager]
2025-08-29 15:06:54.925796 | orchestrator |
2025-08-29 15:06:54.925803 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-08-29 15:06:54.925810 | orchestrator | Friday 29 August 2025 15:06:29 +0000 (0:00:57.253) 0:01:09.629 *********
2025-08-29 15:06:54.925817 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:06:54.925824 | orchestrator |
2025-08-29 15:06:54.925831 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 15:06:54.925838 | orchestrator |
2025-08-29 15:06:54.925845 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 15:06:54.925852 | orchestrator | Friday 29 August 2025 15:06:29 +0000 (0:00:00.137) 0:01:09.767 *********
2025-08-29 15:06:54.925859 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:54.925866 | orchestrator |
2025-08-29 15:06:54.925873 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 15:06:54.925880 | orchestrator |
2025-08-29 15:06:54.925887 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 15:06:54.925894 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:11.833) 0:01:21.601 *********
2025-08-29 15:06:54.925901 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:06:54.925908 | orchestrator |
2025-08-29 15:06:54.925915 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 15:06:54.925922 | orchestrator |
2025-08-29 15:06:54.925929 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 15:06:54.925940 | orchestrator | Friday 29 August 2025 15:06:52 +0000 (0:00:11.125) 0:01:32.727 *********
2025-08-29 15:06:54.925947 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:06:54.925954 | orchestrator |
2025-08-29 15:06:54.925965 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:06:54.925973 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 15:06:54.925980 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:06:54.925987 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:06:54.925994 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:06:54.926001 | orchestrator |
2025-08-29 15:06:54.926008 | orchestrator |
2025-08-29 15:06:54.926061 | orchestrator |
2025-08-29 15:06:54.926071 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:06:54.926096 | orchestrator | Friday 29 August 2025 15:06:53 +0000 (0:00:01.096) 0:01:33.823 *********
2025-08-29 15:06:54.926104 | orchestrator | ===============================================================================
2025-08-29 15:06:54.926111 | orchestrator | Create admin user ------------------------------------------------------ 57.25s
2025-08-29 15:06:54.926118 | orchestrator | Restart ceph manager service ------------------------------------------- 24.06s
2025-08-29 15:06:54.926125 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.36s
2025-08-29 15:06:54.926132 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s
2025-08-29 15:06:54.926139 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.66s
2025-08-29 15:06:54.926146 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.51s
2025-08-29 15:06:54.926154 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.19s
2025-08-29 15:06:54.926161 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.14s
2025-08-29 15:06:54.926168 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s
2025-08-29 15:06:54.926175 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.06s
2025-08-29 15:06:54.926182 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2025-08-29 15:06:54.926189 | orchestrator | 2025-08-29 15:06:54 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:06:54.926197 | orchestrator | 2025-08-29 15:06:54 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:06:54.926204 | orchestrator | 2025-08-29 15:06:54 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:06:54.926211 | orchestrator | 2025-08-29 15:06:54 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:06:54.926218 | orchestrator | 2025-08-29 15:06:54 | INFO  | Task 1462a1d3-690f-424b-b9a0-303bb1938f15 is in state SUCCESS
2025-08-29 15:06:54.926225 | orchestrator | 2025-08-29 15:06:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:06:57.947397 | orchestrator | 2025-08-29 15:06:57 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:06:57.947672 | orchestrator | 2025-08-29 15:06:57 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:06:57.948483 | orchestrator | 2025-08-29 15:06:57 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED
2025-08-29 15:06:57.950097 | orchestrator | 2025-08-29 15:06:57 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:06:57.950162 | orchestrator | 2025-08-29 15:06:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:07:10.142329 | orchestrator | 2025-08-29 15:07:10 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED
2025-08-29 15:07:10.142505 | orchestrator | 2025-08-29 15:07:10 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED
2025-08-29 15:07:10.143388 | orchestrator | 2025-08-29 15:07:10 | INFO  |
Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:10.144231 | orchestrator | 2025-08-29 15:07:10 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:10.144268 | orchestrator | 2025-08-29 15:07:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:13.178136 | orchestrator | 2025-08-29 15:07:13 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:13.178244 | orchestrator | 2025-08-29 15:07:13 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:13.179161 | orchestrator | 2025-08-29 15:07:13 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:13.181070 | orchestrator | 2025-08-29 15:07:13 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:13.181148 | orchestrator | 2025-08-29 15:07:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:16.241342 | orchestrator | 2025-08-29 15:07:16 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:16.241628 | orchestrator | 2025-08-29 15:07:16 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:16.242993 | orchestrator | 2025-08-29 15:07:16 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:16.243738 | orchestrator | 2025-08-29 15:07:16 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:16.243782 | orchestrator | 2025-08-29 15:07:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:19.274621 | orchestrator | 2025-08-29 15:07:19 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:19.274717 | orchestrator | 2025-08-29 15:07:19 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:19.275153 | orchestrator | 2025-08-29 15:07:19 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:19.275672 | orchestrator | 2025-08-29 15:07:19 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:19.275701 | orchestrator | 2025-08-29 15:07:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:22.315502 | orchestrator | 2025-08-29 15:07:22 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:22.317809 | orchestrator | 2025-08-29 15:07:22 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:22.318311 | orchestrator | 2025-08-29 15:07:22 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:22.319175 | orchestrator | 2025-08-29 15:07:22 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:22.319238 | orchestrator | 2025-08-29 15:07:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:25.377745 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:25.380266 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:25.381882 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:25.383477 | orchestrator | 2025-08-29 15:07:25 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:25.383505 | orchestrator | 2025-08-29 15:07:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:28.438273 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:28.440202 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:28.442343 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:28.444136 | orchestrator | 2025-08-29 15:07:28 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:28.444173 | orchestrator | 2025-08-29 15:07:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:31.482387 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:31.483915 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:31.486145 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:31.488544 | orchestrator | 2025-08-29 15:07:31 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:31.489021 | orchestrator | 2025-08-29 15:07:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:34.534659 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:34.538801 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:34.540523 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:34.542414 | orchestrator | 2025-08-29 15:07:34 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:34.542560 | orchestrator | 2025-08-29 15:07:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:37.607295 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:37.609976 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:37.612557 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:37.613850 | orchestrator | 2025-08-29 15:07:37 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:37.613889 | orchestrator | 2025-08-29 15:07:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:40.656585 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:40.657666 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:40.660457 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:40.662987 | orchestrator | 2025-08-29 15:07:40 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:40.663040 | orchestrator | 2025-08-29 15:07:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:43.721777 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:43.723229 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:43.724950 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:43.726989 | orchestrator | 2025-08-29 15:07:43 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:43.727059 | orchestrator | 2025-08-29 15:07:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:46.761850 | orchestrator | 2025-08-29 15:07:46 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:46.761985 | orchestrator | 2025-08-29 15:07:46 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:46.762357 | orchestrator | 2025-08-29 15:07:46 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:46.763125 | orchestrator | 2025-08-29 15:07:46 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:46.763150 | orchestrator | 2025-08-29 15:07:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:49.845128 | orchestrator | 2025-08-29 15:07:49 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:49.845553 | orchestrator | 2025-08-29 15:07:49 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:49.846292 | orchestrator | 2025-08-29 15:07:49 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:49.846917 | orchestrator | 2025-08-29 15:07:49 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:49.847136 | orchestrator | 2025-08-29 15:07:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:52.882770 | orchestrator | 2025-08-29 15:07:52 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:52.885441 | orchestrator | 2025-08-29 15:07:52 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:52.885788 | orchestrator | 2025-08-29 15:07:52 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:52.886666 | orchestrator | 2025-08-29 15:07:52 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:52.886898 | orchestrator | 2025-08-29 15:07:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:55.914934 | orchestrator | 2025-08-29 15:07:55 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:55.915750 | orchestrator | 2025-08-29 15:07:55 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:55.916586 | orchestrator | 2025-08-29 15:07:55 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:55.918841 | orchestrator | 2025-08-29 15:07:55 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:55.919405 | orchestrator | 2025-08-29 15:07:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:58.971065 | orchestrator | 2025-08-29 15:07:58 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:07:58.974934 | orchestrator | 2025-08-29 15:07:58 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:07:58.977945 | orchestrator | 2025-08-29 15:07:58 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:07:58.980772 | orchestrator | 2025-08-29 15:07:58 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:07:58.980843 | orchestrator | 2025-08-29 15:07:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:02.028317 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:02.029475 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:02.030922 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:02.032487 | orchestrator | 2025-08-29 15:08:02 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:02.032519 | orchestrator | 2025-08-29 15:08:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:05.062526 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:05.062795 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:05.063575 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:05.064349 | orchestrator | 2025-08-29 15:08:05 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:05.064372 | orchestrator | 2025-08-29 15:08:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:08.129135 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:08.130464 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:08.130917 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:08.131639 | orchestrator | 2025-08-29 15:08:08 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:08.131672 | orchestrator | 2025-08-29 15:08:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:11.161479 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:11.162475 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:11.165410 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:11.165654 | orchestrator | 2025-08-29 15:08:11 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:11.165718 | orchestrator | 2025-08-29 15:08:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:14.208800 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:14.209011 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:14.210154 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:14.211009 | orchestrator | 2025-08-29 15:08:14 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:14.211114 | orchestrator | 2025-08-29 15:08:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:17.257881 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:17.259923 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:17.261646 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:17.264036 | orchestrator | 2025-08-29 15:08:17 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:17.264082 | orchestrator | 2025-08-29 15:08:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:20.304358 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:20.305750 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:20.307439 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:20.309663 | orchestrator | 2025-08-29 15:08:20 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:20.309692 | orchestrator | 2025-08-29 15:08:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:23.355995 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:23.357011 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:23.357968 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task 
a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:23.359221 | orchestrator | 2025-08-29 15:08:23 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:23.359247 | orchestrator | 2025-08-29 15:08:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:26.400006 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:26.400107 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state STARTED 2025-08-29 15:08:26.401496 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:26.401557 | orchestrator | 2025-08-29 15:08:26 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:26.401570 | orchestrator | 2025-08-29 15:08:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:29.441470 | orchestrator | 2025-08-29 15:08:29 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:29.443237 | orchestrator | 2025-08-29 15:08:29 | INFO  | Task a6d0a3ac-cd72-448a-97d5-0afefced67c6 is in state SUCCESS 2025-08-29 15:08:29.444045 | orchestrator | 2025-08-29 15:08:29.444067 | orchestrator | 2025-08-29 15:08:29.444077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:08:29.444086 | orchestrator | 2025-08-29 15:08:29.444095 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:08:29.444104 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:00.307) 0:00:00.307 ********* 2025-08-29 15:08:29.444112 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:08:29.444121 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:08:29.444129 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:08:29.444135 | orchestrator | 2025-08-29 15:08:29.444141 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:08:29.444147 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.334) 0:00:00.641 ********* 2025-08-29 15:08:29.444154 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-08-29 15:08:29.444161 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-08-29 15:08:29.444168 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-08-29 15:08:29.444175 | orchestrator | 2025-08-29 15:08:29.444182 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-08-29 15:08:29.444190 | orchestrator | 2025-08-29 15:08:29.444199 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:08:29.444207 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.497) 0:00:01.139 ********* 2025-08-29 15:08:29.444215 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:08:29.444223 | orchestrator | 2025-08-29 15:08:29.444230 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-08-29 15:08:29.444249 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:00.690) 0:00:01.830 ********* 2025-08-29 15:08:29.444258 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-08-29 15:08:29.444265 | orchestrator | 2025-08-29 15:08:29.444272 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-08-29 15:08:29.444280 | orchestrator | Friday 29 August 2025 15:05:34 +0000 (0:00:04.714) 0:00:06.544 ********* 2025-08-29 15:08:29.444287 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-08-29 15:08:29.444294 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
https://api.testbed.osism.xyz:9292 -> public) 2025-08-29 15:08:29.444302 | orchestrator | 2025-08-29 15:08:29.444310 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-08-29 15:08:29.444317 | orchestrator | Friday 29 August 2025 15:05:42 +0000 (0:00:07.964) 0:00:14.509 ********* 2025-08-29 15:08:29.444324 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-08-29 15:08:29.444456 | orchestrator | 2025-08-29 15:08:29.444466 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-08-29 15:08:29.444474 | orchestrator | Friday 29 August 2025 15:05:46 +0000 (0:00:04.050) 0:00:18.559 ********* 2025-08-29 15:08:29.444482 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:08:29.444489 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-08-29 15:08:29.444512 | orchestrator | 2025-08-29 15:08:29.444521 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-08-29 15:08:29.444529 | orchestrator | Friday 29 August 2025 15:05:50 +0000 (0:00:04.318) 0:00:22.878 ********* 2025-08-29 15:08:29.444538 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:08:29.444546 | orchestrator | 2025-08-29 15:08:29.444555 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-08-29 15:08:29.444562 | orchestrator | Friday 29 August 2025 15:05:54 +0000 (0:00:04.024) 0:00:26.902 ********* 2025-08-29 15:08:29.444571 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 15:08:29.444578 | orchestrator | 2025-08-29 15:08:29.444585 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 15:08:29.444594 | orchestrator | Friday 29 August 2025 15:05:58 +0000 (0:00:04.439) 0:00:31.341 ********* 2025-08-29 15:08:29.444614 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.444629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.444643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 15:08:29.444653 | orchestrator |
2025-08-29 15:08:29.444661 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:08:29.444668 | orchestrator | Friday 29 August 2025 15:06:02 +0000 (0:00:03.331) 0:00:34.673 *********
2025-08-29 15:08:29.444682 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:08:29.444691 | orchestrator |
2025-08-29 15:08:29.444700 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-08-29 15:08:29.444719 | orchestrator | Friday 29 August 2025 15:06:02 +0000 (0:00:00.552) 0:00:35.225 *********
2025-08-29 15:08:29.444729 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.444737 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:29.444746 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:29.444754 | orchestrator |
2025-08-29 15:08:29.444761 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-08-29 15:08:29.444770 | orchestrator | Friday 29 August 2025 15:06:06 +0000 (0:00:03.493) 0:00:38.719 *********
2025-08-29 15:08:29.444778 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444805 | orchestrator |
2025-08-29 15:08:29.444814 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-08-29 15:08:29.444822 | orchestrator | Friday 29 August 2025 15:06:08 +0000 (0:00:01.800) 0:00:40.520 *********
2025-08-29 15:08:29.444830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:08:29.444862 | orchestrator |
2025-08-29 15:08:29.444871 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-08-29 15:08:29.444878 | orchestrator | Friday 29 August 2025 15:06:09 +0000 (0:00:01.141) 0:00:41.662 *********
2025-08-29 15:08:29.444886 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:08:29.444895 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:08:29.444902 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:08:29.444910 | orchestrator |
2025-08-29 15:08:29.444919 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-08-29 15:08:29.444928 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:00.847) 0:00:42.510 *********
2025-08-29 15:08:29.444936 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.444943 | orchestrator |
2025-08-29 15:08:29.444951 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-08-29 15:08:29.444960 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:00.135) 0:00:42.645 *********
2025-08-29 15:08:29.444968 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.444977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445027 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445036 | orchestrator |
2025-08-29 15:08:29.445044 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:08:29.445051 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:00.342) 0:00:42.988 *********
2025-08-29 15:08:29.445057 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:08:29.445065 | orchestrator |
2025-08-29 15:08:29.445072 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-08-29 15:08:29.445079 | orchestrator | Friday 29 August 2025 15:06:11 +0000 (0:00:00.599) 0:00:43.588 *********
2025-08-29 15:08:29.445092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '',
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445126 | orchestrator | 2025-08-29 15:08:29.445134 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 15:08:29.445140 | orchestrator | Friday 29 August 2025 15:06:16 +0000 (0:00:04.944) 0:00:48.533 ********* 2025-08-29 15:08:29.445154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445168 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:29.445177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:29.445200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:29.445220 | orchestrator | 2025-08-29 15:08:29.445227 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 15:08:29.445235 | orchestrator | Friday 29 August 2025 15:06:19 +0000 (0:00:03.348) 0:00:51.882 ********* 2025-08-29 15:08:29.445246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445255 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:29.445269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445283 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:29.445296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:08:29.445304 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:29.445313 | orchestrator | 2025-08-29 15:08:29.445321 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 15:08:29.445329 | orchestrator | Friday 29 August 2025 15:06:22 +0000 (0:00:03.305) 0:00:55.187 ********* 2025-08-29 15:08:29.445337 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:29.445346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:29.445354 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:29.445362 | orchestrator | 2025-08-29 15:08:29.445370 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 15:08:29.445391 | orchestrator | Friday 29 August 2025 15:06:26 +0000 (0:00:03.506) 0:00:58.694 ********* 2025-08-29 15:08:29.445403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 15:08:29.445443 | orchestrator |
2025-08-29 15:08:29.445451 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-08-29 15:08:29.445459 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:04.199) 0:01:02.894 *********
2025-08-29 15:08:29.445466 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.445473 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:29.445480 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:29.445487 | orchestrator |
2025-08-29 15:08:29.445494 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-08-29 15:08:29.445505 | orchestrator | Friday 29 August 2025 15:06:37 +0000 (0:00:06.925) 0:01:09.820 *********
2025-08-29 15:08:29.445512 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445519 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445526 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445532 | orchestrator |
2025-08-29 15:08:29.445539 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-08-29 15:08:29.445547 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:05.513) 0:01:15.333 *********
2025-08-29 15:08:29.445554 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445562 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445570 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445576 | orchestrator |
2025-08-29 15:08:29.445583 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-08-29 15:08:29.445590 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:04.774) 0:01:20.107 *********
2025-08-29 15:08:29.445597 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445603 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445611 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445618 | orchestrator |
2025-08-29 15:08:29.445625 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-08-29 15:08:29.445633 | orchestrator | Friday 29 August 2025 15:06:51 +0000 (0:00:03.374) 0:01:23.481 *********
2025-08-29 15:08:29.445642 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445651 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445658 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445665 | orchestrator |
2025-08-29 15:08:29.445672 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-08-29 15:08:29.445680 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:05.015) 0:01:28.497 *********
2025-08-29 15:08:29.445692 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445700 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445708 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445715 | orchestrator |
2025-08-29 15:08:29.445722 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-08-29 15:08:29.445729 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.255) 0:01:28.752 *********
2025-08-29 15:08:29.445737 | orchestrator | skipping: [testbed-node-0] =>
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:29.445744 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:29.445752 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:29.445761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:29.445768 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:08:29.445776 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:29.445784 | orchestrator | 2025-08-29 15:08:29.445791 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 15:08:29.445798 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:04.851) 0:01:33.604 ********* 2025-08-29 15:08:29.445805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:08:29.445834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 15:08:29.445847 | orchestrator |
2025-08-29 15:08:29.445855 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:08:29.445863 | orchestrator | Friday 29 August 2025 15:07:06 +0000 (0:00:04.925) 0:01:38.529 *********
2025-08-29 15:08:29.445872 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:08:29.445881 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:08:29.445889 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:08:29.445895 | orchestrator |
2025-08-29 15:08:29.445904 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-08-29 15:08:29.445913 | orchestrator | Friday 29 August 2025 15:07:06 +0000 (0:00:00.543) 0:01:39.073 *********
2025-08-29 15:08:29.445921 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.445930 | orchestrator |
2025-08-29 15:08:29.445939 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-08-29 15:08:29.445947 | orchestrator | Friday 29 August 2025 15:07:09 +0000 (0:00:02.470) 0:01:41.543 *********
2025-08-29 15:08:29.445955 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.445963 | orchestrator |
2025-08-29 15:08:29.445972 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-08-29 15:08:29.445979 | orchestrator | Friday 29 August 2025 15:07:11 +0000 (0:00:02.442) 0:01:43.986 *********
2025-08-29 15:08:29.445986 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.445992 | orchestrator |
2025-08-29 15:08:29.445999 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-08-29 15:08:29.446011 | orchestrator | Friday 29 August 2025 15:07:13 +0000 (0:00:02.251) 0:01:46.237 *********
2025-08-29 15:08:29.446051 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.446060 | orchestrator |
2025-08-29 15:08:29.446069 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-08-29 15:08:29.446077 | orchestrator | Friday 29 August 2025 15:07:43 +0000 (0:00:29.979) 0:02:16.217 *********
2025-08-29 15:08:29.446086 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.446095 | orchestrator |
2025-08-29 15:08:29.446103 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:29.446110 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:02.200) 0:02:18.417 *********
2025-08-29 15:08:29.446118 | orchestrator |
2025-08-29 15:08:29.446126 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:29.446133 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:00.421) 0:02:18.839 *********
2025-08-29 15:08:29.446139 | orchestrator |
2025-08-29 15:08:29.446145 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 15:08:29.446153 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:00.145) 0:02:18.984 *********
2025-08-29 15:08:29.446161 | orchestrator |
2025-08-29 15:08:29.446168 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-08-29 15:08:29.446175 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:00.160) 0:02:19.144 *********
2025-08-29 15:08:29.446186 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:29.446194 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:29.446202 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:29.446209 | orchestrator |
2025-08-29 15:08:29.446217 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:08:29.446228 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:29.446236 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:29.446243 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:29.446250 | orchestrator |
2025-08-29 15:08:29.446258 | orchestrator |
2025-08-29 15:08:29.446265 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:08:29.446272 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:39.397) 0:02:58.541 *********
2025-08-29 15:08:29.446279 | orchestrator | ===============================================================================
2025-08-29 15:08:29.446287 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.40s
2025-08-29 15:08:29.446295 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.98s
2025-08-29 15:08:29.446302 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.96s
2025-08-29 15:08:29.446310 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.93s
2025-08-29 15:08:29.446318 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.51s
2025-08-29 15:08:29.446326 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.02s
2025-08-29 15:08:29.446334 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.95s
2025-08-29 15:08:29.446341 | orchestrator | glance : Check glance containers ---------------------------------------- 4.93s
2025-08-29 15:08:29.446349 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.85s
2025-08-29 15:08:29.446356 | orchestrator | glance :
Copying over glance-swift.conf for glance_api ------------------ 4.77s 2025-08-29 15:08:29.446363 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.71s 2025-08-29 15:08:29.446371 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.44s 2025-08-29 15:08:29.446409 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.32s 2025-08-29 15:08:29.446419 | orchestrator | glance : Copying over config.json files for services -------------------- 4.20s 2025-08-29 15:08:29.446426 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.05s 2025-08-29 15:08:29.446434 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.02s 2025-08-29 15:08:29.446443 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.51s 2025-08-29 15:08:29.446451 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.49s 2025-08-29 15:08:29.446458 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.37s 2025-08-29 15:08:29.446466 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.35s 2025-08-29 15:08:29.446473 | orchestrator | 2025-08-29 15:08:29 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:29.450108 | orchestrator | 2025-08-29 15:08:29 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:29.450746 | orchestrator | 2025-08-29 15:08:29 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:29.450897 | orchestrator | 2025-08-29 15:08:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:32.477726 | orchestrator | 2025-08-29 15:08:32 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:32.478057 
| orchestrator | 2025-08-29 15:08:32 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:32.478681 | orchestrator | 2025-08-29 15:08:32 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:32.478723 | orchestrator | 2025-08-29 15:08:32 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:32.478735 | orchestrator | 2025-08-29 15:08:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:35.506333 | orchestrator | 2025-08-29 15:08:35 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:35.506530 | orchestrator | 2025-08-29 15:08:35 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:35.507291 | orchestrator | 2025-08-29 15:08:35 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:35.507773 | orchestrator | 2025-08-29 15:08:35 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:35.507811 | orchestrator | 2025-08-29 15:08:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:38.542588 | orchestrator | 2025-08-29 15:08:38 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:38.545241 | orchestrator | 2025-08-29 15:08:38 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state STARTED 2025-08-29 15:08:38.546191 | orchestrator | 2025-08-29 15:08:38 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:38.549067 | orchestrator | 2025-08-29 15:08:38 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:38.549127 | orchestrator | 2025-08-29 15:08:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:41.607661 | orchestrator | 2025-08-29 15:08:41 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:08:41.610893 | orchestrator | 2025-08-29 
15:08:41 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:41.618618 | orchestrator | 2025-08-29 15:08:41 | INFO  | Task a5b57f74-10a5-4701-a13d-8be885f81ebb is in state SUCCESS 2025-08-29 15:08:41.621122 | orchestrator | 2025-08-29 15:08:41.621639 | orchestrator | 2025-08-29 15:08:41.621679 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:08:41.621693 | orchestrator | 2025-08-29 15:08:41.621705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:08:41.621716 | orchestrator | Friday 29 August 2025 15:05:20 +0000 (0:00:00.282) 0:00:00.282 ********* 2025-08-29 15:08:41.621727 | orchestrator | ok: [testbed-manager] 2025-08-29 15:08:41.621932 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:08:41.621962 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:08:41.621976 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:08:41.621986 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:08:41.621997 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:08:41.622008 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:08:41.623745 | orchestrator | 2025-08-29 15:08:41.623778 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:08:41.623791 | orchestrator | Friday 29 August 2025 15:05:21 +0000 (0:00:00.913) 0:00:01.195 ********* 2025-08-29 15:08:41.623803 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623815 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623826 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623837 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623848 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623889 | orchestrator | ok: 
[testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623900 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 15:08:41.623911 | orchestrator | 2025-08-29 15:08:41.623922 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 15:08:41.623933 | orchestrator | 2025-08-29 15:08:41.623948 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:08:41.623966 | orchestrator | Friday 29 August 2025 15:05:21 +0000 (0:00:00.753) 0:00:01.949 ********* 2025-08-29 15:08:41.623978 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:08:41.623990 | orchestrator | 2025-08-29 15:08:41.624000 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 15:08:41.624009 | orchestrator | Friday 29 August 2025 15:05:23 +0000 (0:00:01.637) 0:00:03.586 ********* 2025-08-29 15:08:41.624023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624038 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:41.624074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.624331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624528 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624807 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:41.624826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.624869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.624933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.625116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625230 | orchestrator | 2025-08-29 15:08:41.625246 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:08:41.625263 | orchestrator | Friday 29 August 2025 15:05:27 +0000 (0:00:04.062) 0:00:07.649 ********* 2025-08-29 15:08:41.625279 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:08:41.625296 | orchestrator | 2025-08-29 15:08:41.625311 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 15:08:41.625326 | orchestrator | Friday 29 August 2025 15:05:29 +0000 (0:00:01.657) 0:00:09.307 ********* 2025-08-29 15:08:41.625345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 
15:08:41.625366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.625782 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.625907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.625932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.625942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.625953 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.625979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.626011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:41.626249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.626411 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.626435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.626483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.626503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.626519 | orchestrator | 2025-08-29 15:08:41.626535 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-08-29 15:08:41.626552 | orchestrator | Friday 29 August 2025 15:05:35 +0000 (0:00:06.248) 0:00:15.556 ********* 2025-08-29 15:08:41.626657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:08:41.626698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.626768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.626886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:08:41.626904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.626916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.626926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.626937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627004 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627055 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.627123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-08-29 15:08:41.627309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627518 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.627528 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.627575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.627588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627628 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.627638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627709 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.627719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627756 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.627768 | orchestrator | 2025-08-29 15:08:41.627785 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 15:08:41.627801 | orchestrator | Friday 29 August 2025 15:05:37 +0000 (0:00:01.727) 0:00:17.283 ********* 2025-08-29 15:08:41.627818 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:08:41.627842 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.627859 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.627925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:08:41.627946 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.627974 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.627993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628178 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.628188 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.628198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 
15:08:41.628264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:08:41.628292 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.628302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628332 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.628347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628443 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.628453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:08:41.628472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:08:41.628492 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.628502 | orchestrator | 2025-08-29 15:08:41.628512 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 15:08:41.628522 | orchestrator | Friday 29 August 2025 15:05:39 +0000 (0:00:02.033) 0:00:19.316 ********* 2025-08-29 15:08:41.628532 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:41.628548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-08-29 15:08:41.628625 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.628655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628729 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628739 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628826 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:41.628844 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628936 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.628971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.628995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.629005 | orchestrator | 2025-08-29 15:08:41.629015 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 15:08:41.629025 | orchestrator | Friday 29 August 2025 15:05:45 +0000 (0:00:06.261) 0:00:25.578 ********* 2025-08-29 15:08:41.629035 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:41.629044 | orchestrator | 2025-08-29 15:08:41.629054 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 15:08:41.629063 | orchestrator | Friday 29 August 2025 15:05:46 +0000 (0:00:01.159) 0:00:26.737 ********* 2025-08-29 15:08:41.629073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629089 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629135 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629147 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629157 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629167 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629177 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629235 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629320 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629333 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.629364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629403 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629438 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629481 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1313415, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6906602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629492 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 
15:08:41.629503 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629556 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629568 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629578 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629601 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629656 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629667 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1313434, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6967676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.629687 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629697 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629718 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629757 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629769 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629779 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629789 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629800 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629810 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629830 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629868 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629880 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629890 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629901 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629911 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1313410, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.629927 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629947 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.629998 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630061 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630086 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630103 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630119 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630147 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630171 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630250 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630266 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630282 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630297 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1313426, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.630324 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630504 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630568 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630580 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630590 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630601 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630621 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630631 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630647 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630659 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630698 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630710 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630720 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630737 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 
15:08:41.630754 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630777 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1313407, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6875863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.630853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630873 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630890 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630914 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630928 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630945 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630957 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630975 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.630988 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631009 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-08-29 15:08:41.631022 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631034 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631053 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631068 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631094 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631108 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631131 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631144 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631158 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631191 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1313417, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6907144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631214 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631228 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 
15:08:41.631243 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631252 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631260 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631276 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631288 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631302 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.631320 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631348 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631361 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631395 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.631410 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631430 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631444 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.631459 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631481 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631510 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1313424, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6937144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631519 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.631536 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631552 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.631561 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:08:41.631569 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-08-29 15:08:41.631582 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.631598 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1313420, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6919193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631612 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1313413, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6897144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631653 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313433, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631662 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313403, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.686257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631671 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1313450, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7015212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631687 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1313429, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6957145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1313408, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6877143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631716 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1313404, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6867144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631725 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1313423, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6927145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631733 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1313422, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1756477365.6924005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631742 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1313445, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.7007146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:08:41.631750 | orchestrator | 2025-08-29 15:08:41.631758 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-08-29 15:08:41.631766 | orchestrator | Friday 29 August 2025 15:06:13 +0000 (0:00:26.887) 0:00:53.625 ********* 2025-08-29 15:08:41.631826 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:41.631841 | orchestrator | 2025-08-29 15:08:41.631854 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-08-29 15:08:41.631867 | orchestrator | Friday 29 August 2025 15:06:14 +0000 (0:00:01.027) 0:00:54.652 ********* 2025-08-29 15:08:41.631880 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.631893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.631908 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.631922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.631935 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.631949 
| orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:08:41.631962 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.631976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.631989 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632027 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632046 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:41.632060 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.632074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632087 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632114 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632126 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.632140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632153 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632167 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632180 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632194 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.632211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632219 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 
2025-08-29 15:08:41.632235 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632243 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.632251 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632259 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632274 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632282 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.632290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632298 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-08-29 15:08:41.632305 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 15:08:41.632313 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-08-29 15:08:41.632321 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 15:08:41.632329 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 15:08:41.632337 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 15:08:41.632345 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:08:41.632353 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 15:08:41.632361 | orchestrator | 2025-08-29 15:08:41.632368 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-08-29 15:08:41.632399 | orchestrator | Friday 29 August 2025 15:06:16 +0000 (0:00:01.833) 0:00:56.486 ********* 2025-08-29 15:08:41.632408 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.632424 | orchestrator | skipping: 
[testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632432 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.632440 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.632456 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632464 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.632472 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.632495 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 15:08:41.632503 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.632511 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-08-29 15:08:41.632519 | orchestrator | 2025-08-29 15:08:41.632527 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-08-29 15:08:41.632535 | orchestrator | Friday 29 August 2025 15:06:33 +0000 (0:00:17.519) 0:01:14.005 ********* 2025-08-29 15:08:41.632542 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.632566 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632574 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.632582 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632590 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.632598 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632606 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.632614 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632622 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.632630 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 15:08:41.632638 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.632650 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-08-29 15:08:41.632658 | orchestrator | 2025-08-29 15:08:41.632666 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-08-29 15:08:41.632674 | orchestrator | Friday 29 August 2025 15:06:37 +0000 (0:00:03.238) 0:01:17.244 ********* 2025-08-29 15:08:41.632682 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632692 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.632700 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632708 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632721 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632730 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.632738 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:08:41.632783 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.632797 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-08-29 15:08:41.632806 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632815 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.632823 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 15:08:41.632830 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.632838 | orchestrator | 2025-08-29 15:08:41.632846 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-08-29 15:08:41.632856 | orchestrator | Friday 29 August 2025 15:06:39 +0000 (0:00:02.527) 0:01:19.771 ********* 2025-08-29 15:08:41.632869 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:41.632889 | orchestrator | 2025-08-29 15:08:41.632903 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-08-29 15:08:41.632915 | orchestrator | Friday 29 August 2025 15:06:40 +0000 (0:00:00.865) 0:01:20.637 ********* 2025-08-29 15:08:41.632929 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.632941 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.632953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.632965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.632979 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.632992 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.633006 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633019 | orchestrator | 2025-08-29 15:08:41.633032 | orchestrator | TASK 
[prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-08-29 15:08:41.633043 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:01.153) 0:01:21.791 ********* 2025-08-29 15:08:41.633051 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.633059 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.633066 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.633074 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:41.633088 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:41.633097 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633105 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:41.633148 | orchestrator | 2025-08-29 15:08:41.633165 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 15:08:41.633178 | orchestrator | Friday 29 August 2025 15:06:43 +0000 (0:00:02.371) 0:01:24.163 ********* 2025-08-29 15:08:41.633191 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633206 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633220 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.633233 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633247 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.633256 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.633263 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633271 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.633279 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633287 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 15:08:41.633295 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633303 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.633310 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:08:41.633318 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633326 | orchestrator | 2025-08-29 15:08:41.633335 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 15:08:41.633349 | orchestrator | Friday 29 August 2025 15:06:46 +0000 (0:00:02.195) 0:01:26.358 ********* 2025-08-29 15:08:41.633362 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633433 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.633449 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633463 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.633484 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633499 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633521 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.633533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.633541 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633549 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.633556 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 15:08:41.633563 | 
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:08:41.633578 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633585 | orchestrator | 2025-08-29 15:08:41.633592 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 15:08:41.633598 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:01.649) 0:01:28.008 ********* 2025-08-29 15:08:41.633605 | orchestrator | [WARNING]: Skipped 2025-08-29 15:08:41.633612 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 15:08:41.633619 | orchestrator | due to this access issue: 2025-08-29 15:08:41.633626 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 15:08:41.633632 | orchestrator | not a directory 2025-08-29 15:08:41.633639 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:08:41.633646 | orchestrator | 2025-08-29 15:08:41.633653 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 15:08:41.633659 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:01.021) 0:01:29.029 ********* 2025-08-29 15:08:41.633666 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.633672 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.633679 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.633686 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.633692 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.633699 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.633706 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633718 | orchestrator | 2025-08-29 15:08:41.633728 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 15:08:41.633739 | orchestrator | 
Friday 29 August 2025 15:06:49 +0000 (0:00:00.910) 0:01:29.940 ********* 2025-08-29 15:08:41.633750 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:41.633760 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:08:41.633771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:08:41.633782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:08:41.633793 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:08:41.633804 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:08:41.633816 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:08:41.633825 | orchestrator | 2025-08-29 15:08:41.633832 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 15:08:41.633838 | orchestrator | Friday 29 August 2025 15:06:50 +0000 (0:00:00.779) 0:01:30.719 ********* 2025-08-29 15:08:41.633846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633868 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:08:41.633895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633903 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.633918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.633925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.633943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.633965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.633976 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:08:41.633984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.633992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.633999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.634006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634102 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:08:41.634130 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:08:41.634142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.634155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.634163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:08:41.634170 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:08:41.634177 | orchestrator |
2025-08-29 15:08:41.634184 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-08-29 15:08:41.634191 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:04.689) 0:01:35.409 *********
2025-08-29 15:08:41.634198 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 15:08:41.634209 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:08:41.634216 | orchestrator |
2025-08-29 15:08:41.634223 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634229 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:01.122) 0:01:36.531 *********
2025-08-29 15:08:41.634236 | orchestrator |
2025-08-29 15:08:41.634243 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634249 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.062) 0:01:36.594 *********
2025-08-29 15:08:41.634256 | orchestrator |
2025-08-29 15:08:41.634263 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634269 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.061) 0:01:36.656 *********
2025-08-29 15:08:41.634276 | orchestrator |
2025-08-29 15:08:41.634282 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634289 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.183) 0:01:36.840 *********
2025-08-29 15:08:41.634296 | orchestrator |
2025-08-29 15:08:41.634302 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634309 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.060) 0:01:36.900 *********
2025-08-29 15:08:41.634316 | orchestrator |
2025-08-29 15:08:41.634322 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634332 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.060) 0:01:36.961 *********
2025-08-29 15:08:41.634343 | orchestrator |
2025-08-29 15:08:41.634354 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 15:08:41.634364 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:00.112) 0:01:37.073 *********
2025-08-29 15:08:41.634392 | orchestrator |
2025-08-29 15:08:41.634403 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-08-29 15:08:41.634413 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:00.168) 0:01:37.241 *********
2025-08-29 15:08:41.634424 | orchestrator | changed: [testbed-manager]
2025-08-29 15:08:41.634439 | orchestrator |
2025-08-29 15:08:41.634454 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-08-29 15:08:41.634471 | orchestrator | Friday 29 August 2025 15:07:12 +0000 (0:00:15.808) 0:01:53.049 *********
2025-08-29 15:08:41.634488 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:08:41.634503 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:08:41.634514 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:41.634525 | orchestrator | changed: [testbed-manager]
2025-08-29 15:08:41.634535 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:41.634545 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:41.634557 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:08:41.634568 | orchestrator |
2025-08-29 15:08:41.634585 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-08-29 15:08:41.634597 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:16.041) 0:02:09.091 *********
2025-08-29 15:08:41.634604 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:41.634611 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:41.634617 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:41.634624 | orchestrator |
2025-08-29 15:08:41.634631 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-08-29 15:08:41.634637 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:05.579) 0:02:14.670 *********
2025-08-29 15:08:41.634644 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:41.634650 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:41.634657 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:41.634664 | orchestrator |
2025-08-29 15:08:41.634670 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-08-29 15:08:41.634677 | orchestrator | Friday 29 August 2025 15:07:45 +0000 (0:00:10.505) 0:02:25.176 *********
2025-08-29 15:08:41.634684 | orchestrator | changed: [testbed-manager]
2025-08-29 15:08:41.634696 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:41.634709 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:08:41.634715 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:08:41.634722 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:41.634728 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:08:41.634735 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:41.634742 | orchestrator |
2025-08-29 15:08:41.634748 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-08-29 15:08:41.634755 | orchestrator | Friday 29 August 2025 15:08:02 +0000 (0:00:17.773) 0:02:42.949 *********
2025-08-29 15:08:41.634762 | orchestrator | changed: [testbed-manager]
2025-08-29 15:08:41.634768 | orchestrator |
2025-08-29 15:08:41.634775 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-08-29
15:08:41.634781 | orchestrator | Friday 29 August 2025 15:08:14 +0000 (0:00:11.518) 0:02:54.468 *********
2025-08-29 15:08:41.634788 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:08:41.634795 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:08:41.634801 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:08:41.634808 | orchestrator |
2025-08-29 15:08:41.634814 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-08-29 15:08:41.634821 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:05.261) 0:03:06.258 *********
2025-08-29 15:08:41.634828 | orchestrator | changed: [testbed-manager]
2025-08-29 15:08:41.634834 | orchestrator |
2025-08-29 15:08:41.634841 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-08-29 15:08:41.634847 | orchestrator | Friday 29 August 2025 15:08:31 +0000 (0:00:05.261) 0:03:11.519 *********
2025-08-29 15:08:41.634854 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:08:41.634860 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:08:41.634870 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:08:41.634880 | orchestrator |
2025-08-29 15:08:41.634891 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:08:41.634902 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:08:41.634913 | orchestrator | testbed-node-0  : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:41.634923 | orchestrator | testbed-node-1  : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:41.634934 | orchestrator | testbed-node-2  : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:08:41.634945 | orchestrator | testbed-node-3  : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:41.634955 | orchestrator | testbed-node-4  : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:41.634965 | orchestrator | testbed-node-5  : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 15:08:41.634976 | orchestrator |
2025-08-29 15:08:41.634988 | orchestrator |
2025-08-29 15:08:41.634999 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:08:41.635010 | orchestrator | Friday 29 August 2025 15:08:38 +0000 (0:00:06.924) 0:03:18.443 *********
2025-08-29 15:08:41.635021 | orchestrator | ===============================================================================
2025-08-29 15:08:41.635029 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.89s
2025-08-29 15:08:41.635036 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.77s
2025-08-29 15:08:41.635043 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.52s
2025-08-29 15:08:41.635056 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.04s
2025-08-29 15:08:41.635063 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.81s
2025-08-29 15:08:41.635069 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.79s
2025-08-29 15:08:41.635076 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.52s
2025-08-29 15:08:41.635082 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.51s
2025-08-29 15:08:41.635089 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.92s
2025-08-29 15:08:41.635100 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.26s
2025-08-29
15:08:41.635107 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.25s 2025-08-29 15:08:41.635113 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.58s 2025-08-29 15:08:41.635120 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.26s 2025-08-29 15:08:41.635127 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.69s 2025-08-29 15:08:41.635133 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.06s 2025-08-29 15:08:41.635140 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.24s 2025-08-29 15:08:41.635147 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.53s 2025-08-29 15:08:41.635153 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.37s 2025-08-29 15:08:41.635165 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.20s 2025-08-29 15:08:41.635172 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.03s 2025-08-29 15:08:41.635179 | orchestrator | 2025-08-29 15:08:41 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:41.635186 | orchestrator | 2025-08-29 15:08:41 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:41.635192 | orchestrator | 2025-08-29 15:08:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:44.673456 | orchestrator | 2025-08-29 15:08:44 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:08:44.677763 | orchestrator | 2025-08-29 15:08:44 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:44.680126 | orchestrator | 2025-08-29 15:08:44 | INFO  | Task 
4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:44.682153 | orchestrator | 2025-08-29 15:08:44 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:44.682237 | orchestrator | 2025-08-29 15:08:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:47.730979 | orchestrator | 2025-08-29 15:08:47 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:08:47.732359 | orchestrator | 2025-08-29 15:08:47 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:47.733952 | orchestrator | 2025-08-29 15:08:47 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:47.734644 | orchestrator | 2025-08-29 15:08:47 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:47.734699 | orchestrator | 2025-08-29 15:08:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:50.783534 | orchestrator | 2025-08-29 15:08:50 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:08:50.785892 | orchestrator | 2025-08-29 15:08:50 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:50.787716 | orchestrator | 2025-08-29 15:08:50 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:08:50.790186 | orchestrator | 2025-08-29 15:08:50 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:08:50.790227 | orchestrator | 2025-08-29 15:08:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:53.834203 | orchestrator | 2025-08-29 15:08:53 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:08:53.836450 | orchestrator | 2025-08-29 15:08:53 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:08:53.838193 | orchestrator | 2025-08-29 15:08:53 | INFO  | Task 
4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:09:48.635761 | orchestrator | 2025-08-29 15:09:48 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:09:48.635788 | orchestrator | 2025-08-29 15:09:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:51.660899 | orchestrator | 2025-08-29 15:09:51 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:09:51.661861 | orchestrator | 2025-08-29 15:09:51 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:09:51.663232 | orchestrator | 2025-08-29 15:09:51 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:09:51.663683 | orchestrator | 2025-08-29 15:09:51 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:09:51.663708 | orchestrator | 2025-08-29 15:09:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:54.692454 | orchestrator | 2025-08-29 15:09:54 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:09:54.694734 | orchestrator | 2025-08-29 15:09:54 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:09:54.699014 | orchestrator | 2025-08-29 15:09:54 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:09:54.700869 | orchestrator | 2025-08-29 15:09:54 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:09:54.700936 | orchestrator | 2025-08-29 15:09:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:57.734460 | orchestrator | 2025-08-29 15:09:57 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:09:57.735560 | orchestrator | 2025-08-29 15:09:57 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:09:57.736940 | orchestrator | 2025-08-29 15:09:57 | INFO  | Task 
4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:09:57.737549 | orchestrator | 2025-08-29 15:09:57 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:09:57.737581 | orchestrator | 2025-08-29 15:09:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:00.766511 | orchestrator | 2025-08-29 15:10:00 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:10:00.767070 | orchestrator | 2025-08-29 15:10:00 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:10:00.767892 | orchestrator | 2025-08-29 15:10:00 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:10:00.768236 | orchestrator | 2025-08-29 15:10:00 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:10:00.768362 | orchestrator | 2025-08-29 15:10:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:03.835051 | orchestrator | 2025-08-29 15:10:03 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:10:03.838470 | orchestrator | 2025-08-29 15:10:03 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state STARTED 2025-08-29 15:10:03.854451 | orchestrator | 2025-08-29 15:10:03 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:10:03.857469 | orchestrator | 2025-08-29 15:10:03 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED 2025-08-29 15:10:03.860457 | orchestrator | 2025-08-29 15:10:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:06.924533 | orchestrator | 2025-08-29 15:10:06 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED 2025-08-29 15:10:06.927875 | orchestrator | 2025-08-29 15:10:06 | INFO  | Task f8915d59-bba6-47d7-ac6b-f7b4b0b3f4cc is in state SUCCESS 2025-08-29 15:10:06.929154 | orchestrator | 2025-08-29 15:10:06.929207 | orchestrator | 2025-08-29 
15:10:06.929224 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:10:06.929240 | orchestrator | 2025-08-29 15:10:06.929253 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:10:06.929266 | orchestrator | Friday 29 August 2025 15:05:56 +0000 (0:00:00.786) 0:00:00.786 ********* 2025-08-29 15:10:06.929279 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:10:06.930168 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:10:06.930257 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:10:06.930264 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:10:06.930268 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:10:06.930272 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:10:06.930277 | orchestrator | 2025-08-29 15:10:06.930285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:10:06.930297 | orchestrator | Friday 29 August 2025 15:05:57 +0000 (0:00:01.171) 0:00:01.957 ********* 2025-08-29 15:10:06.930304 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 15:10:06.930311 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 15:10:06.930317 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 15:10:06.930349 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 15:10:06.930355 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-08-29 15:10:06.930362 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-08-29 15:10:06.930368 | orchestrator | 2025-08-29 15:10:06.930375 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-08-29 15:10:06.930382 | orchestrator | 2025-08-29 15:10:06.930389 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 
15:10:06.930397 | orchestrator | Friday 29 August 2025 15:05:58 +0000 (0:00:01.140) 0:00:03.098 ********* 2025-08-29 15:10:06.930403 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:10:06.930411 | orchestrator | 2025-08-29 15:10:06.930418 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-08-29 15:10:06.930424 | orchestrator | Friday 29 August 2025 15:06:00 +0000 (0:00:01.387) 0:00:04.486 ********* 2025-08-29 15:10:06.930431 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-08-29 15:10:06.930438 | orchestrator | 2025-08-29 15:10:06.930476 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-08-29 15:10:06.930484 | orchestrator | Friday 29 August 2025 15:06:03 +0000 (0:00:03.604) 0:00:08.090 ********* 2025-08-29 15:10:06.930492 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-08-29 15:10:06.930499 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-08-29 15:10:06.930506 | orchestrator | 2025-08-29 15:10:06.930513 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-08-29 15:10:06.930519 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:06.599) 0:00:14.690 ********* 2025-08-29 15:10:06.930526 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:10:06.930534 | orchestrator | 2025-08-29 15:10:06.930540 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-08-29 15:10:06.930548 | orchestrator | Friday 29 August 2025 15:06:13 +0000 (0:00:03.524) 0:00:18.215 ********* 2025-08-29 15:10:06.930555 | orchestrator | [WARNING]: 
Module did not set no_log for update_password 2025-08-29 15:10:06.930562 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-08-29 15:10:06.930569 | orchestrator | 2025-08-29 15:10:06.930576 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-08-29 15:10:06.930583 | orchestrator | Friday 29 August 2025 15:06:17 +0000 (0:00:03.982) 0:00:22.198 ********* 2025-08-29 15:10:06.930590 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:10:06.930596 | orchestrator | 2025-08-29 15:10:06.930603 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-08-29 15:10:06.930609 | orchestrator | Friday 29 August 2025 15:06:21 +0000 (0:00:03.662) 0:00:25.860 ********* 2025-08-29 15:10:06.930615 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-08-29 15:10:06.930622 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-08-29 15:10:06.930629 | orchestrator | 2025-08-29 15:10:06.930635 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 15:10:06.930641 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:08.951) 0:00:34.811 ********* 2025-08-29 15:10:06.930742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.930755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.930769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.930777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.930965 | orchestrator | 2025-08-29 15:10:06.930970 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:10:06.930974 | orchestrator | Friday 29 August 2025 15:06:33 +0000 (0:00:02.977) 0:00:37.789 ********* 2025-08-29 15:10:06.930978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.930983 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.930992 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.930996 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.931000 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.931004 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.931008 | orchestrator | 2025-08-29 15:10:06.931012 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:10:06.931016 | orchestrator | Friday 29 August 2025 15:06:34 +0000 (0:00:00.874) 0:00:38.663 
********* 2025-08-29 15:10:06.931020 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.931024 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.931029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.931033 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:10:06.931037 | orchestrator | 2025-08-29 15:10:06.931041 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-08-29 15:10:06.931045 | orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:01.564) 0:00:40.228 ********* 2025-08-29 15:10:06.931049 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-08-29 15:10:06.931053 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-08-29 15:10:06.931057 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-08-29 15:10:06.931062 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-08-29 15:10:06.931066 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-08-29 15:10:06.931070 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-08-29 15:10:06.931073 | orchestrator | 2025-08-29 15:10:06.931077 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-08-29 15:10:06.931081 | orchestrator | Friday 29 August 2025 15:06:37 +0000 (0:00:02.020) 0:00:42.253 ********* 2025-08-29 15:10:06.931087 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931091 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931115 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931124 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931128 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931133 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 15:10:06.931137 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931158 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931168 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931173 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931178 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931182 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 15:10:06.931186 | orchestrator | 2025-08-29 15:10:06.931196 | orchestrator | 
TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-08-29 15:10:06.931201 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:04.368) 0:00:46.622 *********
2025-08-29 15:10:06.931205 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:10:06.931209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:10:06.931213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:10:06.931218 | orchestrator |
2025-08-29 15:10:06.931222 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-08-29 15:10:06.931226 | orchestrator | Friday 29 August 2025 15:06:45 +0000 (0:00:02.805) 0:00:49.427 *********
2025-08-29 15:10:06.931243 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-08-29 15:10:06.931248 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-08-29 15:10:06.931252 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-08-29 15:10:06.931257 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:10:06.931260 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:10:06.931264 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:10:06.931268 | orchestrator |
2025-08-29 15:10:06.931273 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-08-29 15:10:06.931277 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:03.679) 0:00:53.106 *********
2025-08-29 15:10:06.931281 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-08-29 15:10:06.931285 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-08-29 15:10:06.931289 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-08-29 15:10:06.931293 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-08-29 15:10:06.931297 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-08-29 15:10:06.931301 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-08-29 15:10:06.931305 | orchestrator |
2025-08-29 15:10:06.931309 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-08-29 15:10:06.931313 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:01.240) 0:00:54.346 *********
2025-08-29 15:10:06.931317 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:06.931321 | orchestrator |
2025-08-29 15:10:06.931366 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-08-29 15:10:06.931370 | orchestrator | Friday 29 August 2025 15:06:50 +0000 (0:00:00.206) 0:00:54.553 *********
2025-08-29 15:10:06.931374 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:06.931378 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:06.931382 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:06.931386 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:06.931390 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:06.931394 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:06.931398 | orchestrator |
2025-08-29 15:10:06.931401 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:10:06.931405 | orchestrator | Friday 29 August 2025 15:06:50 +0000 (0:00:00.754) 0:00:55.308 *********
2025-08-29 15:10:06.931410 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:10:06.931416 | orchestrator |
2025-08-29 15:10:06.931420
| orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-08-29 15:10:06.931424 | orchestrator | Friday 29 August 2025 15:06:53 +0000 (0:00:02.225) 0:00:57.533 ********* 2025-08-29 15:10:06.931428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.931445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.931466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.931485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931537 | orchestrator | 
2025-08-29 15:10:06.931541 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-08-29 15:10:06.931546 | orchestrator | Friday 29 August 2025 15:06:56 +0000 (0:00:03.632) 0:01:01.166 ********* 2025-08-29 15:10:06.931555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931564 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.931568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931591 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.931596 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.931606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931622 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.931629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931652 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.931663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931687 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.931694 | orchestrator | 2025-08-29 15:10:06.931702 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-08-29 15:10:06.931709 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:02.011) 0:01:03.177 ********* 2025-08-29 15:10:06.931714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.931742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931762 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:10:06.931771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.931777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931785 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.931790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931800 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.931807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931821 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.931825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.931845 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.931855 | orchestrator | 2025-08-29 15:10:06.931863 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 15:10:06.931871 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:02.290) 0:01:05.467 ********* 2025-08-29 15:10:06.931878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.931893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.931906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 
15:10:06.931920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931972 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.931999 | orchestrator | 2025-08-29 15:10:06.932003 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 15:10:06.932007 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:03.861) 0:01:09.328 ********* 2025-08-29 15:10:06.932011 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:10:06.932018 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.932022 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:10:06.932026 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.932030 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:10:06.932035 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:10:06.932038 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.932043 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:10:06.932055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:10:06.932059 | orchestrator | 2025-08-29 15:10:06.932063 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 15:10:06.932067 | orchestrator | Friday 29 August 2025 15:07:07 +0000 (0:00:02.779) 0:01:12.107 ********* 2025-08-29 15:10:06.932071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 
15:10:06.932151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932185 | orchestrator | 2025-08-29 15:10:06.932191 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 15:10:06.932198 | orchestrator | Friday 29 August 2025 15:07:19 +0000 (0:00:11.960) 0:01:24.068 ********* 2025-08-29 15:10:06.932204 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.932209 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.932215 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.932221 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:10:06.932228 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:10:06.932235 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:10:06.932241 | orchestrator | 2025-08-29 15:10:06.932246 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 15:10:06.932252 | orchestrator | Friday 29 August 2025 15:07:22 +0000 (0:00:02.323) 0:01:26.391 ********* 2025-08-29 15:10:06.932258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.932265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932276 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.932289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.932296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932304 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.932311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 15:10:06.932317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932349 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.932358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932382 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.932392 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932405 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.932412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:10:06.932432 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.932439 | orchestrator | 2025-08-29 15:10:06.932445 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 15:10:06.932452 | orchestrator | Friday 29 August 2025 15:07:23 +0000 (0:00:01.028) 0:01:27.419 ********* 2025-08-29 15:10:06.932458 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:06.932464 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:06.932470 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:06.932476 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:06.932482 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:06.932495 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:06.932503 | orchestrator | 2025-08-29 15:10:06.932510 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 15:10:06.932516 | orchestrator | Friday 29 August 2025 15:07:23 +0000 (0:00:00.736) 0:01:28.156 ********* 2025-08-29 15:10:06.932529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:06.932557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:06.932634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:10:06.932642 | orchestrator |
2025-08-29 15:10:06.932649 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:10:06.932655 | orchestrator | Friday 29 August 2025 15:07:26 +0000 (0:00:00.818) 0:01:30.386 *********
2025-08-29 15:10:06.932662 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:06.932669 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:06.932675 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:06.932681 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:06.932688 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:06.932694 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:06.932701 | orchestrator |
2025-08-29 15:10:06.932707 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-08-29 15:10:06.932713 | orchestrator | Friday 29 August 2025 15:07:26 +0000 (0:00:00.818) 0:01:31.205 *********
2025-08-29 15:10:06.932719 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:06.932725 | orchestrator |
2025-08-29 15:10:06.932732 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-08-29 15:10:06.932737 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:01.868) 0:01:33.073 *********
2025-08-29 15:10:06.932743 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:06.932756 | orchestrator |
2025-08-29 15:10:06.932763 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-08-29 15:10:06.932769 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:02.017) 0:01:35.091 *********
2025-08-29 15:10:06.932776 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:06.932783 | orchestrator |
2025-08-29 15:10:06.932790 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932796 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:19.168) 0:01:54.259 *********
2025-08-29 15:10:06.932803 | orchestrator |
2025-08-29 15:10:06.932809 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932816 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:00.097) 0:01:54.356 *********
2025-08-29 15:10:06.932822 | orchestrator |
2025-08-29 15:10:06.932829 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932835 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.101) 0:01:54.457 *********
2025-08-29 15:10:06.932843 | orchestrator |
2025-08-29 15:10:06.932849 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932856 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.089) 0:01:54.547 *********
2025-08-29 15:10:06.932862 | orchestrator |
2025-08-29 15:10:06.932868 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932875 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.117) 0:01:54.665 *********
2025-08-29 15:10:06.932881 | orchestrator |
2025-08-29 15:10:06.932887 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:10:06.932894 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.106) 0:01:54.771 *********
2025-08-29 15:10:06.932900 | orchestrator |
2025-08-29 15:10:06.932907 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-08-29 15:10:06.932913 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.093) 0:01:54.865 *********
2025-08-29 15:10:06.932920 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:06.932927 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:10:06.932934 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:10:06.932941 | orchestrator |
2025-08-29 15:10:06.932947 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-08-29 15:10:06.932954 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:35.577) 0:02:30.443 *********
2025-08-29 15:10:06.932960 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:06.932967 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:10:06.932974 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:10:06.932981 | orchestrator |
2025-08-29 15:10:06.932987 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-08-29 15:10:06.932994 | orchestrator | Friday 29 August 2025 15:08:31 +0000 (0:00:05.643) 0:02:36.086 *********
2025-08-29 15:10:06.933000 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:10:06.933006 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:10:06.933012 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:10:06.933017 | orchestrator |
2025-08-29 15:10:06.933024 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-08-29 15:10:06.933028 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:01:23.582) 0:03:59.668 *********
2025-08-29 15:10:06.933032 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:10:06.933036 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:10:06.933040 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:10:06.933044 | orchestrator |
2025-08-29 15:10:06.933048 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-08-29 15:10:06.933052 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:08.638) 0:04:08.307 *********
2025-08-29 15:10:06.933056 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:06.933059 | orchestrator |
2025-08-29 15:10:06.933063 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:10:06.933078 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:10:06.933082 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:10:06.933086 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:10:06.933090 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:10:06.933095 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:10:06.933099 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:10:06.933103 | orchestrator |
2025-08-29 15:10:06.933107 | orchestrator |
2025-08-29 15:10:06.933111 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:10:06.933115 | orchestrator | Friday 29 August 2025 15:10:04 +0000 (0:00:00.786) 0:04:09.093 *********
2025-08-29 15:10:06.933119 | orchestrator | ===============================================================================
2025-08-29 15:10:06.933123 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 83.58s
2025-08-29 15:10:06.933127 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 35.58s
2025-08-29 15:10:06.933131 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.17s
2025-08-29 15:10:06.933135 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.96s
2025-08-29 15:10:06.933139 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.95s
2025-08-29 15:10:06.933143 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.64s
2025-08-29 15:10:06.933147 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.60s
2025-08-29 15:10:06.933151 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.64s
2025-08-29 15:10:06.933155 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.37s
2025-08-29 15:10:06.933159 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.98s
2025-08-29 15:10:06.933163 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.86s
2025-08-29 15:10:06.933166 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.68s
2025-08-29 15:10:06.933170 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.66s
2025-08-29 15:10:06.933174 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.63s
2025-08-29 15:10:06.933178 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.60s
2025-08-29 15:10:06.933182 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.52s
2025-08-29 15:10:06.933186 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.98s
2025-08-29 15:10:06.933190 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.81s
2025-08-29 15:10:06.933194 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.78s
2025-08-29 15:10:06.933198 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.32s
2025-08-29 15:10:06.933202 | orchestrator | 2025-08-29 15:10:06 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:06.933208 | orchestrator | 2025-08-29 15:10:06 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:06.933214 | orchestrator | 2025-08-29 15:10:06 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:06.933225 | orchestrator | 2025-08-29 15:10:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:09.968618 | orchestrator | 2025-08-29 15:10:09 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:09.969775 | orchestrator | 2025-08-29 15:10:09 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:09.970720 | orchestrator | 2025-08-29 15:10:09 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:09.972043 | orchestrator | 2025-08-29 15:10:09 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:09.972108 | orchestrator | 2025-08-29 15:10:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:13.002360 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:13.010720 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:13.012107 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:13.012932 | orchestrator | 2025-08-29 15:10:13 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:13.013072 | orchestrator | 2025-08-29 15:10:13 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:16.047639 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:16.047735 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:16.048368 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:16.048748 | orchestrator | 2025-08-29 15:10:16 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:16.048846 | orchestrator | 2025-08-29 15:10:16 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:19.072593 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:19.072738 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:19.073569 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:19.074150 | orchestrator | 2025-08-29 15:10:19 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:19.074302 | orchestrator | 2025-08-29 15:10:19 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:22.100419 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:22.100631 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:22.101489 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:22.101906 | orchestrator | 2025-08-29 15:10:22 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:22.102230 | orchestrator | 2025-08-29 15:10:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:25.138669 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:25.139682 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:25.139895 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:25.140730 | orchestrator | 2025-08-29 15:10:25 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:25.140823 | orchestrator | 2025-08-29 15:10:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:28.177656 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:28.178666 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:28.180384 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:28.182799 | orchestrator | 2025-08-29 15:10:28 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:28.182835 | orchestrator | 2025-08-29 15:10:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:31.224474 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:31.226261 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:31.226951 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:31.229344 | orchestrator | 2025-08-29 15:10:31 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:31.229387 | orchestrator | 2025-08-29 15:10:31 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:34.259973 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state STARTED
2025-08-29 15:10:34.261607 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:34.263497 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:34.265037 | orchestrator | 2025-08-29 15:10:34 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:34.265078 | orchestrator | 2025-08-29 15:10:34 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:37.304938 | orchestrator |
2025-08-29 15:10:37.305025 | orchestrator |
2025-08-29 15:10:37.305037 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:10:37.305045 | orchestrator |
2025-08-29 15:10:37.305052 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:10:37.305058 | orchestrator | Friday 29 August 2025 15:08:42 +0000 (0:00:00.262) 0:00:00.262 *********
2025-08-29 15:10:37.305065 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:10:37.305073 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:10:37.305080 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:10:37.305086 | orchestrator |
2025-08-29 15:10:37.305092 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:10:37.305099 | orchestrator | Friday 29 August 2025 15:08:42 +0000 (0:00:00.305) 0:00:00.568 *********
2025-08-29 15:10:37.305106 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-08-29 15:10:37.305113 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-08-29 15:10:37.305120 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-08-29 15:10:37.305127 | orchestrator |
2025-08-29 15:10:37.305133 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-08-29 15:10:37.305140 | orchestrator |
2025-08-29 15:10:37.305147 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 15:10:37.305154 | orchestrator | Friday 29 August 2025 15:08:43 +0000 (0:00:00.527) 0:00:01.096 *********
2025-08-29 15:10:37.305187 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:10:37.305197 | orchestrator |
2025-08-29 15:10:37.305203 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-08-29 15:10:37.305276 | orchestrator | Friday 29 August 2025 15:08:44 +0000 (0:00:00.858) 0:00:01.955 *********
2025-08-29 15:10:37.305283 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-08-29 15:10:37.305287 | orchestrator |
2025-08-29 15:10:37.305291 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-08-29 15:10:37.305374 | orchestrator | Friday 29 August 2025 15:08:47 +0000 (0:00:03.237) 0:00:05.192 *********
2025-08-29 15:10:37.305384 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-08-29 15:10:37.305391 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-08-29 15:10:37.305398 | orchestrator |
2025-08-29 15:10:37.305404 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-08-29 15:10:37.305410 | orchestrator | Friday 29 August 2025 15:08:54 +0000 (0:00:06.815) 0:00:12.008 *********
2025-08-29 15:10:37.305417 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:10:37.305424 | orchestrator |
2025-08-29 15:10:37.305430 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-08-29 15:10:37.305437 | orchestrator | Friday 29 August 2025 15:08:57 +0000 (0:00:03.399) 0:00:15.407 *********
2025-08-29 15:10:37.305444 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:10:37.305451 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-08-29 15:10:37.305457 | orchestrator | 2025-08-29 15:10:37.305463 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-08-29 15:10:37.305470 | orchestrator | Friday 29 August 2025 15:09:01 +0000 (0:00:03.921) 0:00:19.329 ********* 2025-08-29 15:10:37.305476 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:10:37.305483 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 15:10:37.305489 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 15:10:37.305496 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 15:10:37.305502 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 15:10:37.305508 | orchestrator | 2025-08-29 15:10:37.305514 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 15:10:37.305520 | orchestrator | Friday 29 August 2025 15:09:17 +0000 (0:00:15.948) 0:00:35.277 ********* 2025-08-29 15:10:37.305527 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 15:10:37.305533 | orchestrator | 2025-08-29 15:10:37.305540 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 15:10:37.305546 | orchestrator | Friday 29 August 2025 15:09:21 +0000 (0:00:04.086) 0:00:39.364 ********* 2025-08-29 15:10:37.305568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305662 | orchestrator | 2025-08-29 15:10:37.305666 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 15:10:37.305670 | orchestrator | Friday 29 August 2025 15:09:23 +0000 (0:00:01.724) 0:00:41.089 ********* 2025-08-29 15:10:37.305674 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-08-29 15:10:37.305677 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-08-29 15:10:37.305681 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-08-29 15:10:37.305685 | orchestrator | 2025-08-29 15:10:37.305688 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-08-29 15:10:37.305692 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:01.248) 0:00:42.338 ********* 2025-08-29 15:10:37.305696 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.305700 | orchestrator | 2025-08-29 15:10:37.305703 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-08-29 15:10:37.305707 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:00.131) 0:00:42.469 ********* 2025-08-29 15:10:37.305711 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.305714 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.305718 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.305722 | orchestrator | 2025-08-29 15:10:37.305725 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2025-08-29 15:10:37.305729 | orchestrator | Friday 29 August 2025 15:09:25 +0000 (0:00:00.512) 0:00:42.982 ********* 2025-08-29 15:10:37.305733 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:10:37.305751 | orchestrator | 2025-08-29 15:10:37.305755 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-08-29 15:10:37.305758 | orchestrator | Friday 29 August 2025 15:09:26 +0000 (0:00:01.021) 0:00:44.004 ********* 2025-08-29 15:10:37.305764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.305785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.305818 | orchestrator | 2025-08-29 15:10:37.305822 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 15:10:37.305826 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:03.787) 0:00:47.791 ********* 2025-08-29 15:10:37.305830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305848 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.305856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-08-29 15:10:37.305864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305868 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.305872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305898 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.305904 | orchestrator | 2025-08-29 15:10:37.305914 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 15:10:37.305920 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:00.864) 0:00:48.656 ********* 2025-08-29 15:10:37.305926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.305978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.305984 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.305990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.305996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306006 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306055 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.306065 | orchestrator | 2025-08-29 15:10:37.306071 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 15:10:37.306077 | orchestrator | Friday 29 August 2025 15:09:32 +0000 (0:00:01.629) 0:00:50.285 ********* 2025-08-29 15:10:37.306085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.306094 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37 | INFO  | Task f9298c85-fd66-41f8-b342-cac4a96777d7 is in state SUCCESS 2025-08-29 15:10:37.306565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311',
'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.306597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306670 | orchestrator | 2025-08-29 15:10:37.306678 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 15:10:37.306693 | orchestrator | Friday 29 August 2025 15:09:36 +0000 (0:00:04.001) 0:00:54.287 ********* 2025-08-29 15:10:37.306700 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:10:37.306708 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.306715 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:10:37.306721 | orchestrator | 2025-08-29 15:10:37.306727 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 15:10:37.306734 | orchestrator | Friday 29 August 2025 15:09:39 +0000 (0:00:02.703) 0:00:56.991 ********* 2025-08-29 15:10:37.306741 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:10:37.306748 | orchestrator | 2025-08-29 15:10:37.306754 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 15:10:37.306761 | orchestrator | Friday 29 August 2025 15:09:41 +0000 (0:00:02.031) 0:00:59.022 ********* 2025-08-29 15:10:37.306768 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.306775 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.306782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.306789 | orchestrator | 2025-08-29 15:10:37.306795 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 15:10:37.306802 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:01.385) 0:01:00.407 ********* 2025-08-29 15:10:37.306814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.306830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.306838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.306852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.306906 | orchestrator | 2025-08-29 15:10:37.306912 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 15:10:37.306919 | orchestrator | Friday 29 August 2025 15:09:51 +0000 (0:00:09.092) 0:01:09.500 ********* 2025-08-29 15:10:37.306925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 
15:10:37.306932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306947 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.306963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.306971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.306992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.307000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:10:37.307007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.307018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:10:37.307026 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.307034 | orchestrator | 2025-08-29 15:10:37.307041 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 15:10:37.307049 | orchestrator | Friday 29 August 2025 15:09:52 +0000 (0:00:01.149) 0:01:10.650 ********* 2025-08-29 15:10:37.307063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.307077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.307087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:10:37.307096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:10:37.307158 | orchestrator | 2025-08-29 15:10:37.307165 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:10:37.307172 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:03.344) 0:01:13.994 ********* 2025-08-29 15:10:37.307178 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:37.307185 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:37.307192 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:37.307199 | orchestrator | 2025-08-29 15:10:37.307205 | 
orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 15:10:37.307211 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.650) 0:01:14.644 ********* 2025-08-29 15:10:37.307218 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307225 | orchestrator | 2025-08-29 15:10:37.307232 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 15:10:37.307238 | orchestrator | Friday 29 August 2025 15:09:59 +0000 (0:00:02.392) 0:01:17.037 ********* 2025-08-29 15:10:37.307245 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307251 | orchestrator | 2025-08-29 15:10:37.307258 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 15:10:37.307265 | orchestrator | Friday 29 August 2025 15:10:01 +0000 (0:00:02.459) 0:01:19.496 ********* 2025-08-29 15:10:37.307271 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307278 | orchestrator | 2025-08-29 15:10:37.307285 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:10:37.307295 | orchestrator | Friday 29 August 2025 15:10:13 +0000 (0:00:12.123) 0:01:31.620 ********* 2025-08-29 15:10:37.307354 | orchestrator | 2025-08-29 15:10:37.307364 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:10:37.307371 | orchestrator | Friday 29 August 2025 15:10:14 +0000 (0:00:00.074) 0:01:31.694 ********* 2025-08-29 15:10:37.307388 | orchestrator | 2025-08-29 15:10:37.307395 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:10:37.307403 | orchestrator | Friday 29 August 2025 15:10:14 +0000 (0:00:00.070) 0:01:31.765 ********* 2025-08-29 15:10:37.307409 | orchestrator | 2025-08-29 15:10:37.307416 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api 
container] ******************** 2025-08-29 15:10:37.307423 | orchestrator | Friday 29 August 2025 15:10:14 +0000 (0:00:00.086) 0:01:31.852 ********* 2025-08-29 15:10:37.307430 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307436 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:10:37.307443 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:10:37.307450 | orchestrator | 2025-08-29 15:10:37.307457 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 15:10:37.307464 | orchestrator | Friday 29 August 2025 15:10:20 +0000 (0:00:06.739) 0:01:38.591 ********* 2025-08-29 15:10:37.307471 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307479 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:10:37.307495 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:10:37.307503 | orchestrator | 2025-08-29 15:10:37.307510 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 15:10:37.307517 | orchestrator | Friday 29 August 2025 15:10:26 +0000 (0:00:05.583) 0:01:44.174 ********* 2025-08-29 15:10:37.307524 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:10:37.307531 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:10:37.307538 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:10:37.307545 | orchestrator | 2025-08-29 15:10:37.307552 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:10:37.307560 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:10:37.307569 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:10:37.307576 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:10:37.307583 | orchestrator | 2025-08-29 15:10:37.307590 | 
orchestrator |
2025-08-29 15:10:37.307597 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:10:37.307604 | orchestrator | Friday 29 August 2025 15:10:37 +0000 (0:00:10.503) 0:01:54.677 *********
2025-08-29 15:10:37.307611 | orchestrator | ===============================================================================
2025-08-29 15:10:37.307619 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.95s
2025-08-29 15:10:37.307626 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.12s
2025-08-29 15:10:37.307633 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.50s
2025-08-29 15:10:37.307640 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.09s
2025-08-29 15:10:37.307647 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.82s
2025-08-29 15:10:37.307656 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.74s
2025-08-29 15:10:37.307663 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.58s
2025-08-29 15:10:37.307670 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.09s
2025-08-29 15:10:37.307678 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.00s
2025-08-29 15:10:37.307685 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.92s
2025-08-29 15:10:37.307693 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.79s
2025-08-29 15:10:37.307700 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.40s
2025-08-29 15:10:37.307715 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.34s
2025-08-29 15:10:37.307722 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.24s
2025-08-29 15:10:37.307729 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.70s
2025-08-29 15:10:37.307736 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.46s
2025-08-29 15:10:37.307743 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.39s
2025-08-29 15:10:37.307749 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.03s
2025-08-29 15:10:37.307756 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.72s
2025-08-29 15:10:37.307763 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.63s
2025-08-29 15:10:37.307771 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:37.307779 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:37.307786 | orchestrator | 2025-08-29 15:10:37 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:37.307794 | orchestrator | 2025-08-29 15:10:37 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:40.350748 | orchestrator | 2025-08-29 15:10:40 | INFO  | Task e2500012-127f-411b-a8f3-7bd8de41a77c is in state STARTED
2025-08-29 15:10:40.350991 | orchestrator | 2025-08-29 15:10:40 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:10:40.354091 | orchestrator | 2025-08-29 15:10:40 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:10:40.356702 | orchestrator | 2025-08-29 15:10:40 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:10:40.356758 | orchestrator | 2025-08-29 15:10:40 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:50.259402 | orchestrator | 2025-08-29 15:11:50 | INFO  | Task e2500012-127f-411b-a8f3-7bd8de41a77c is in state SUCCESS
2025-08-29 15:11:50.259521 | orchestrator | 2025-08-29 15:11:50 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED
2025-08-29 15:11:50.260736 | orchestrator | 2025-08-29 15:11:50 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:11:50.260774 | orchestrator | 2025-08-29 15:11:50 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:11:50.261618 | orchestrator | 2025-08-29 15:11:50 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:11:50.261675 | orchestrator | 2025-08-29 15:11:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:12:02.408565 | orchestrator | 2025-08-29 15:12:02 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED
2025-08-29 15:12:02.410972 | orchestrator | 2025-08-29 15:12:02 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:12:02.413205 | orchestrator | 2025-08-29 15:12:02 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:12:02.414961 | orchestrator | 2025-08-29 15:12:02 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:12:02.415017 | orchestrator | 2025-08-29 15:12:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:12:05.464091 | orchestrator | 2025-08-29 15:12:05 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED
2025-08-29 15:12:05.465187 | orchestrator | 2025-08-29 15:12:05 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:12:05.465921 | orchestrator | 2025-08-29 15:12:05 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:12:05.466983 | orchestrator | 2025-08-29 15:12:05 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state STARTED
2025-08-29 15:12:05.467003 | orchestrator | 2025-08-29 15:12:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:12:54.227909 | orchestrator | 2025-08-29 15:12:54 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED
2025-08-29 15:12:54.229253 | orchestrator | 2025-08-29 15:12:54 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED
2025-08-29 15:12:54.231120 | orchestrator | 2025-08-29 15:12:54 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:12:54.234877 | orchestrator | 2025-08-29 15:12:54 | INFO  | Task 47849012-0385-4036-9e2f-b0e13dcb273b is in state SUCCESS
2025-08-29 15:12:54.236284 | orchestrator |
2025-08-29 15:12:54.236333 | orchestrator |
2025-08-29 15:12:54.236342 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-08-29 15:12:54.236351 | orchestrator |
2025-08-29 15:12:54.236357 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-08-29 15:12:54.236365 | orchestrator | Friday 29 August 2025 15:10:45 +0000 (0:00:00.276) 0:00:00.276 *********
2025-08-29 15:12:54.236371 | orchestrator | changed: [localhost]
2025-08-29 15:12:54.236380 | orchestrator |
2025-08-29 15:12:54.236386 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-08-29 15:12:54.236392 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:00.963) 0:00:01.240 *********
2025-08-29 15:12:54.236399 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-08-29 15:12:54.236405 | orchestrator | changed: [localhost]
2025-08-29 15:12:54.236424 | orchestrator |
2025-08-29 15:12:54.236431 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-08-29 15:12:54.236437 | orchestrator | Friday 29 August 2025 15:11:42 +0000 (0:00:55.585) 0:00:56.825 *********
2025-08-29 15:12:54.236444 | orchestrator | changed: [localhost]
2025-08-29 15:12:54.236450 | orchestrator |
2025-08-29 15:12:54.236456 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:12:54.236463 | orchestrator |
2025-08-29 15:12:54.236469 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:12:54.236476 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:04.541) 0:01:01.367 *********
2025-08-29 15:12:54.236482 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:12:54.236489 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:12:54.236496 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:12:54.236503 | orchestrator |
2025-08-29 15:12:54.236511 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:12:54.236518 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:00.243) 0:01:01.611 *********
2025-08-29 15:12:54.236524 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-08-29 15:12:54.236531 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-08-29 15:12:54.236540 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-08-29 15:12:54.236546 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-08-29 15:12:54.236553 | orchestrator |
2025-08-29 15:12:54.236561 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-08-29 15:12:54.236567 | orchestrator | skipping: no hosts matched
2025-08-29 15:12:54.236574 | orchestrator |
2025-08-29 15:12:54.236581 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:12:54.236588 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:12:54.236764 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:12:54.236777 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:12:54.236784 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:12:54.236791 | orchestrator |
2025-08-29 15:12:54.236798 | orchestrator |
2025-08-29 15:12:54.236805 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:12:54.236812 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:00.527) 0:01:02.138 *********
2025-08-29 15:12:54.236820 | orchestrator | ===============================================================================
2025-08-29 15:12:54.236826 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 55.59s
2025-08-29 15:12:54.236833 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.54s
2025-08-29 15:12:54.236841 | orchestrator | Ensure the destination directory exists --------------------------------- 0.96s
2025-08-29 15:12:54.236847 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-08-29 15:12:54.236855 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s
2025-08-29 15:12:54.236884 | orchestrator |
2025-08-29 15:12:54.236891 | orchestrator |
2025-08-29 15:12:54.236898 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:12:54.236905 | orchestrator |
2025-08-29 15:12:54.236911 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:12:54.236919 | orchestrator | Friday 29 August 2025 15:08:31 +0000 (0:00:00.291) 0:00:00.291 *********
2025-08-29 15:12:54.236926 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:12:54.236933 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:12:54.236940 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:12:54.236947 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:12:54.236953 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:12:54.236959 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:12:54.236965 | orchestrator |
2025-08-29 15:12:54.236971 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:12:54.237035 | orchestrator | Friday 29 August 2025 15:08:32 +0000 (0:00:01.320) 0:00:01.611 *********
2025-08-29 15:12:54.237044 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-08-29 15:12:54.237052 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-08-29 15:12:54.237058 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-08-29 15:12:54.237065 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-08-29 15:12:54.237071 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-08-29 15:12:54.237078 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-08-29 15:12:54.237085 | orchestrator |
2025-08-29 15:12:54.237091 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-08-29 15:12:54.237097 | orchestrator |
2025-08-29 15:12:54.237104 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 15:12:54.237123 | orchestrator | Friday 29 August 2025 15:08:33 +0000 (0:00:01.286) 0:00:02.898 *********
2025-08-29 15:12:54.237130 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:12:54.237137 | orchestrator |
2025-08-29 15:12:54.237144 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-08-29 15:12:54.237150 | orchestrator | Friday 29 August 2025 15:08:35 +0000 (0:00:01.827) 0:00:04.725 *********
2025-08-29 15:12:54.237157 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:12:54.237163 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:12:54.237170 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:12:54.237177 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:12:54.237192 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:12:54.237221 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:12:54.237228 | orchestrator |
2025-08-29 15:12:54.237234 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-08-29 15:12:54.237241 | orchestrator | Friday 29 August 2025 15:08:36 +0000 (0:00:01.127) 0:00:05.853 *********
2025-08-29 15:12:54.237247 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:12:54.237254 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:12:54.237260 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:12:54.237267 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:12:54.237274 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:12:54.237281 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:12:54.237288 | orchestrator |
2025-08-29 15:12:54.237295 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-08-29 15:12:54.237302 | orchestrator | Friday 29 August 2025 15:08:38 +0000 (0:00:01.241) 0:00:07.095 *********
2025-08-29 15:12:54.237309 | orchestrator | ok:
[testbed-node-0] => { 2025-08-29 15:12:54.237316 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237323 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237330 | orchestrator | } 2025-08-29 15:12:54.237337 | orchestrator | ok: [testbed-node-1] => { 2025-08-29 15:12:54.237343 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237350 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237357 | orchestrator | } 2025-08-29 15:12:54.237364 | orchestrator | ok: [testbed-node-2] => { 2025-08-29 15:12:54.237371 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237377 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237383 | orchestrator | } 2025-08-29 15:12:54.237390 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 15:12:54.237397 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237404 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237411 | orchestrator | } 2025-08-29 15:12:54.237418 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 15:12:54.237425 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237432 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237438 | orchestrator | } 2025-08-29 15:12:54.237444 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 15:12:54.237451 | orchestrator |  "changed": false, 2025-08-29 15:12:54.237458 | orchestrator |  "msg": "All assertions passed" 2025-08-29 15:12:54.237464 | orchestrator | } 2025-08-29 15:12:54.237471 | orchestrator | 2025-08-29 15:12:54.237478 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-08-29 15:12:54.237486 | orchestrator | Friday 29 August 2025 15:08:38 +0000 (0:00:00.663) 0:00:07.759 ********* 2025-08-29 15:12:54.237492 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.237499 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.237505 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
15:12:54.237512 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.237519 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.237525 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.237531 | orchestrator | 2025-08-29 15:12:54.237538 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-08-29 15:12:54.237545 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:00.674) 0:00:08.434 ********* 2025-08-29 15:12:54.237552 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-08-29 15:12:54.237560 | orchestrator | 2025-08-29 15:12:54.237566 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-08-29 15:12:54.237574 | orchestrator | Friday 29 August 2025 15:08:43 +0000 (0:00:03.786) 0:00:12.220 ********* 2025-08-29 15:12:54.237586 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-08-29 15:12:54.237594 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-08-29 15:12:54.237601 | orchestrator | 2025-08-29 15:12:54.237608 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-08-29 15:12:54.237622 | orchestrator | Friday 29 August 2025 15:08:49 +0000 (0:00:06.209) 0:00:18.429 ********* 2025-08-29 15:12:54.237630 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:12:54.237637 | orchestrator | 2025-08-29 15:12:54.237642 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-08-29 15:12:54.237649 | orchestrator | Friday 29 August 2025 15:08:52 +0000 (0:00:03.546) 0:00:21.976 ********* 2025-08-29 15:12:54.237654 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:12:54.237660 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service) 2025-08-29 15:12:54.237667 | orchestrator | 2025-08-29 15:12:54.237674 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-08-29 15:12:54.237680 | orchestrator | Friday 29 August 2025 15:08:56 +0000 (0:00:04.031) 0:00:26.007 ********* 2025-08-29 15:12:54.237686 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:12:54.237692 | orchestrator | 2025-08-29 15:12:54.237698 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-08-29 15:12:54.237705 | orchestrator | Friday 29 August 2025 15:09:00 +0000 (0:00:03.934) 0:00:29.941 ********* 2025-08-29 15:12:54.237711 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-08-29 15:12:54.237718 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-08-29 15:12:54.237724 | orchestrator | 2025-08-29 15:12:54.237731 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:12:54.237736 | orchestrator | Friday 29 August 2025 15:09:08 +0000 (0:00:07.808) 0:00:37.750 ********* 2025-08-29 15:12:54.237751 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.237758 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.237765 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.237771 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.237777 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.237784 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.237790 | orchestrator | 2025-08-29 15:12:54.237796 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-08-29 15:12:54.237803 | orchestrator | Friday 29 August 2025 15:09:09 +0000 (0:00:00.778) 0:00:38.529 ********* 2025-08-29 15:12:54.237810 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.237816 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 15:12:54.237823 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.237830 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.237836 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.237843 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.237849 | orchestrator | 2025-08-29 15:12:54.237855 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-08-29 15:12:54.237861 | orchestrator | Friday 29 August 2025 15:09:11 +0000 (0:00:02.455) 0:00:40.984 ********* 2025-08-29 15:12:54.237867 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:12:54.237874 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:12:54.237880 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:12:54.237887 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:12:54.237893 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:12:54.237900 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:12:54.237906 | orchestrator | 2025-08-29 15:12:54.237913 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 15:12:54.237919 | orchestrator | Friday 29 August 2025 15:09:13 +0000 (0:00:01.125) 0:00:42.109 ********* 2025-08-29 15:12:54.237925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.237931 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.237938 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.237945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.237950 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.237956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.237962 | orchestrator | 2025-08-29 15:12:54.237968 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-08-29 15:12:54.237982 | orchestrator | Friday 29 August 2025 15:09:15 +0000 (0:00:02.180) 0:00:44.290 ********* 2025-08-29 
15:12:54.237992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238124 | orchestrator | 2025-08-29 15:12:54.238131 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 15:12:54.238138 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:02.790) 0:00:47.081 ********* 2025-08-29 15:12:54.238145 | orchestrator | [WARNING]: Skipped 2025-08-29 15:12:54.238153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 15:12:54.238161 | orchestrator | due to this access issue: 2025-08-29 15:12:54.238182 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 15:12:54.238189 | orchestrator | a directory 2025-08-29 15:12:54.238196 
| orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:12:54.238254 | orchestrator | 2025-08-29 15:12:54.238262 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:12:54.238269 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:00.764) 0:00:47.845 ********* 2025-08-29 15:12:54.238277 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:12:54.238285 | orchestrator | 2025-08-29 15:12:54.238292 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 15:12:54.238299 | orchestrator | Friday 29 August 2025 15:09:19 +0000 (0:00:01.066) 0:00:48.912 ********* 2025-08-29 15:12:54.238312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238375 | orchestrator | 2025-08-29 15:12:54.238382 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 15:12:54.238388 | orchestrator | Friday 29 August 2025 15:09:22 +0000 (0:00:02.697) 0:00:51.610 ********* 2025-08-29 15:12:54.238395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.238407 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.238414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.238421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.238431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.238438 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.238445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238452 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.238464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.238483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238490 | orchestrator | skipping: [testbed-node-5] 
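The per-service item dicts echoed in the tasks above carry the container healthcheck definitions (`healthcheck_curl` against the API port for neutron-server, `healthcheck_port` for the OVN metadata agent). A minimal YAML sketch of one such entry, reconstructed from the values shown in the log (illustrative only, not the actual kolla-ansible source file):

```yaml
# Illustrative reconstruction of the neutron-server service entry seen in the
# log items above; field names and values are copied from the log output.
neutron-server:
  container_name: neutron_server
  image: registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711
  group: neutron-server
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"]
    timeout: "30"
```

The "Copying over backend internal TLS certificate/key" tasks report `skipping` for every host and item here because backend TLS is not enabled in this deployment, so only the extra-CA copy step makes changes.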
2025-08-29 15:12:54.238496 | orchestrator |
2025-08-29 15:12:54.238501 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-08-29 15:12:54.238507 | orchestrator | Friday 29 August 2025 15:09:24 +0000 (0:00:02.360) 0:00:53.970 *********
2025-08-29 15:12:54.238513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 15:12:54.238520 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:54.238531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 15:12:54.238538 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:12:54.238551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 15:12:54.238564 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:12:54.238571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 15:12:54.238578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 15:12:54.238584 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:12:54.238590 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:12:54.238600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 15:12:54.238607 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:12:54.238612 | orchestrator |
2025-08-29 15:12:54.238619 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-08-29 15:12:54.238625 | orchestrator | Friday 29 August 2025 15:09:27 +0000 (0:00:02.331) 0:00:56.944 *********
2025-08-29 15:12:54.238631 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:54.238636 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:12:54.238644 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:12:54.238651 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:12:54.238657 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:12:54.238663 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:12:54.238669 | orchestrator |
2025-08-29 15:12:54.238675 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-08-29 15:12:54.238681 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:02.331) 0:00:59.276 *********
2025-08-29 15:12:54.238693 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:54.238700 | orchestrator |
2025-08-29 15:12:54.238705 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-08-29 15:12:54.238711 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:00.124) 0:00:59.400 *********
2025-08-29 15:12:54.238717 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:54.238724 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:12:54.238729 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:12:54.238735 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:12:54.238741 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:12:54.238748 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:12:54.238755 | orchestrator |
2025-08-29 15:12:54.238761 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-08-29 15:12:54.238767 | orchestrator | Friday 29 August 2025 15:09:31 +0000 (0:00:00.732)
0:01:00.133 ********* 2025-08-29 15:12:54.238781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.238789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.238796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 
15:12:54.238803 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.238809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238815 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.238827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.238839 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.238852 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238860 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.238867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.238874 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.238881 | orchestrator | 2025-08-29 15:12:54.238888 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 15:12:54.238895 | orchestrator | Friday 29 August 2025 15:09:33 +0000 (0:00:02.618) 0:01:02.751 ********* 2025-08-29 15:12:54.238902 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.238939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.238960 | orchestrator | 2025-08-29 15:12:54.238966 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 15:12:54.238972 | orchestrator | Friday 29 August 2025 15:09:37 +0000 (0:00:03.706) 0:01:06.458 ********* 2025-08-29 15:12:54.238982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.239056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2025-08-29 15:12:54.239079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.239085 | orchestrator | 2025-08-29 15:12:54.239090 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 15:12:54.239095 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:07.067) 0:01:13.526 ********* 2025-08-29 15:12:54.239106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239112 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239118 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239124 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239172 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239179 | orchestrator | 2025-08-29 15:12:54.239184 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 15:12:54.239191 | orchestrator | Friday 29 August 2025 15:09:48 +0000 (0:00:03.631) 0:01:17.158 ********* 2025-08-29 15:12:54.239219 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:54.239228 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239234 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239241 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239246 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:54.239253 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:54.239259 | orchestrator | 2025-08-29 15:12:54.239265 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 15:12:54.239272 | orchestrator | Friday 29 August 2025 15:09:51 +0000 (0:00:02.965) 0:01:20.123 ********* 2025-08-29 15:12:54.239279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239291 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239306 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239325 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.239362 | orchestrator | 2025-08-29 15:12:54.239369 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 15:12:54.239375 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:00:04.145) 0:01:24.269 ********* 2025-08-29 15:12:54.239382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239389 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239402 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239408 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239415 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239421 | orchestrator | 2025-08-29 15:12:54.239428 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 15:12:54.239433 | orchestrator | Friday 29 August 2025 15:09:58 +0000 (0:00:03.263) 0:01:27.533 ********* 2025-08-29 
15:12:54.239439 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239452 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239457 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239464 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239470 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239476 | orchestrator | 2025-08-29 15:12:54.239483 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 15:12:54.239493 | orchestrator | Friday 29 August 2025 15:10:01 +0000 (0:00:03.205) 0:01:30.739 ********* 2025-08-29 15:12:54.239500 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239513 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239519 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239525 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239532 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239538 | orchestrator | 2025-08-29 15:12:54.239544 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 15:12:54.239550 | orchestrator | Friday 29 August 2025 15:10:03 +0000 (0:00:02.256) 0:01:32.995 ********* 2025-08-29 15:12:54.239556 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239563 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239568 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239575 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239581 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239588 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239594 | orchestrator | 2025-08-29 15:12:54.239599 | orchestrator | TASK [neutron : Copying over eswitchd.conf] 
************************************ 2025-08-29 15:12:54.239605 | orchestrator | Friday 29 August 2025 15:10:07 +0000 (0:00:03.200) 0:01:36.195 ********* 2025-08-29 15:12:54.239612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239619 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239625 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239631 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239637 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239644 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239651 | orchestrator | 2025-08-29 15:12:54.239657 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 15:12:54.239663 | orchestrator | Friday 29 August 2025 15:10:09 +0000 (0:00:02.773) 0:01:38.969 ********* 2025-08-29 15:12:54.239675 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239695 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239708 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239714 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239721 | orchestrator | 2025-08-29 15:12:54.239727 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 15:12:54.239733 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:03.003) 0:01:41.972 ********* 2025-08-29 15:12:54.239739 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239747 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239754 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239760 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:12:54.239766 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239773 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239779 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239786 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239792 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239799 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239806 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:12:54.239812 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239819 | orchestrator | 2025-08-29 15:12:54.239825 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 15:12:54.239833 | orchestrator | Friday 29 August 2025 15:10:15 +0000 (0:00:03.009) 0:01:44.981 ********* 2025-08-29 15:12:54.239840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-08-29 15:12:54.239848 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.239861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.239875 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.239895 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.239903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239910 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.239916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239923 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.239929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.239936 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.239943 | orchestrator | 2025-08-29 15:12:54.239949 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 15:12:54.239955 | orchestrator | Friday 29 August 2025 15:10:18 +0000 (0:00:02.144) 0:01:47.125 ********* 2025-08-29 15:12:54.239966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.239982 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.239996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.240004 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
 2025-08-29 15:12:54.240018 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240032 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240062 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240076 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240083 | orchestrator | 2025-08-29 15:12:54.240090 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 15:12:54.240097 | orchestrator | Friday 29 August 2025 15:10:20 +0000 (0:00:02.686) 0:01:49.811 ********* 2025-08-29 15:12:54.240103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240109 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240122 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240129 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240140 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240146 | orchestrator | 2025-08-29 15:12:54.240153 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 15:12:54.240160 | orchestrator | Friday 29 August 2025 15:10:23 +0000 (0:00:02.747) 0:01:52.559 ********* 2025-08-29 15:12:54.240167 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240174 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240181 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240189 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:12:54.240196 | 
orchestrator | changed: [testbed-node-4] 2025-08-29 15:12:54.240234 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:12:54.240240 | orchestrator | 2025-08-29 15:12:54.240246 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-08-29 15:12:54.240253 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:05.202) 0:01:57.762 ********* 2025-08-29 15:12:54.240260 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240266 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240273 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240279 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240286 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240292 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240298 | orchestrator | 2025-08-29 15:12:54.240304 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 15:12:54.240311 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:02.769) 0:02:00.531 ********* 2025-08-29 15:12:54.240317 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240323 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240330 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240337 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240344 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240351 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240358 | orchestrator | 2025-08-29 15:12:54.240364 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-08-29 15:12:54.240371 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:02.884) 0:02:03.416 ********* 2025-08-29 15:12:54.240386 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240400 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240407 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240421 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240428 | orchestrator | 2025-08-29 15:12:54.240435 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-08-29 15:12:54.240442 | orchestrator | Friday 29 August 2025 15:10:37 +0000 (0:00:02.677) 0:02:06.093 ********* 2025-08-29 15:12:54.240449 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240455 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240467 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240473 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240480 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240487 | orchestrator | 2025-08-29 15:12:54.240493 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-08-29 15:12:54.240499 | orchestrator | Friday 29 August 2025 15:10:40 +0000 (0:00:03.204) 0:02:09.297 ********* 2025-08-29 15:12:54.240505 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240511 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240517 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240524 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240530 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240537 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240543 | orchestrator | 2025-08-29 15:12:54.240550 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-08-29 15:12:54.240556 | orchestrator | Friday 29 August 2025 15:10:43 +0000 (0:00:02.969) 0:02:12.267 ********* 2025-08-29 15:12:54.240563 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240569 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240581 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240586 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240592 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240598 | orchestrator | 2025-08-29 15:12:54.240611 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-08-29 15:12:54.240617 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:03.250) 0:02:15.518 ********* 2025-08-29 15:12:54.240623 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240629 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240635 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240641 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240648 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240654 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240660 | orchestrator | 2025-08-29 15:12:54.240665 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 15:12:54.240672 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:03.569) 0:02:19.088 ********* 2025-08-29 15:12:54.240678 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240684 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240690 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240697 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240704 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240710 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240717 | orchestrator | 2025-08-29 15:12:54.240724 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 
2025-08-29 15:12:54.240731 | orchestrator | Friday 29 August 2025 15:10:53 +0000 (0:00:03.606) 0:02:22.694 ********* 2025-08-29 15:12:54.240738 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240754 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240760 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240767 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240774 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240789 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240798 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240804 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240811 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240818 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240825 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:12:54.240832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240838 | orchestrator | 2025-08-29 15:12:54.240844 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 15:12:54.240851 | orchestrator | Friday 29 August 2025 15:10:56 +0000 (0:00:02.769) 0:02:25.464 ********* 2025-08-29 15:12:54.240860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.240868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.240875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.240882 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.240894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:12:54.240923 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.240930 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.240936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240943 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.240950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:12:54.240956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.240962 | orchestrator | 2025-08-29 15:12:54.240968 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 15:12:54.240974 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:00:03.231) 0:02:28.695 ********* 2025-08-29 15:12:54.240984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.240991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.241012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.241020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.241028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:12:54.241039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:12:54.241051 | orchestrator | 2025-08-29 15:12:54.241058 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:12:54.241065 | orchestrator | Friday 29 August 2025 15:11:02 +0000 (0:00:03.038) 0:02:31.734 ********* 2025-08-29 15:12:54.241071 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:54.241078 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:54.241085 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:54.241091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:12:54.241098 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:12:54.241105 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:12:54.241110 | orchestrator | 2025-08-29 15:12:54.241117 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-08-29 15:12:54.241124 | orchestrator | Friday 29 August 2025 15:11:03 +0000 (0:00:00.576) 0:02:32.310 ********* 2025-08-29 15:12:54.241131 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:54.241138 
| orchestrator | 2025-08-29 15:12:54.241145 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 15:12:54.241152 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:02.354) 0:02:34.665 ********* 2025-08-29 15:12:54.241160 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:54.241166 | orchestrator | 2025-08-29 15:12:54.241172 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 15:12:54.241179 | orchestrator | Friday 29 August 2025 15:11:07 +0000 (0:00:02.231) 0:02:36.897 ********* 2025-08-29 15:12:54.241186 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:54.241193 | orchestrator | 2025-08-29 15:12:54.241250 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:12:54.241259 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:43.314) 0:03:20.211 ********* 2025-08-29 15:12:54.241266 | orchestrator | 2025-08-29 15:12:54.241277 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:12:54.241284 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.078) 0:03:20.290 ********* 2025-08-29 15:12:54.241291 | orchestrator | 2025-08-29 15:12:54.241298 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:12:54.241305 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.055) 0:03:20.346 ********* 2025-08-29 15:12:54.241312 | orchestrator | 2025-08-29 15:12:54.241318 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:12:54.241326 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.056) 0:03:20.403 ********* 2025-08-29 15:12:54.241332 | orchestrator | 2025-08-29 15:12:54.241339 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 
2025-08-29 15:12:54.241346 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.159) 0:03:20.562 ********* 2025-08-29 15:12:54.241353 | orchestrator | 2025-08-29 15:12:54.241360 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:12:54.241368 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.053) 0:03:20.615 ********* 2025-08-29 15:12:54.241374 | orchestrator | 2025-08-29 15:12:54.241381 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 15:12:54.241388 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:00.065) 0:03:20.680 ********* 2025-08-29 15:12:54.241394 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:54.241401 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:54.241408 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:54.241414 | orchestrator | 2025-08-29 15:12:54.241421 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-08-29 15:12:54.241427 | orchestrator | Friday 29 August 2025 15:12:22 +0000 (0:00:31.031) 0:03:51.712 ********* 2025-08-29 15:12:54.241434 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:12:54.241441 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:12:54.241448 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:12:54.241464 | orchestrator | 2025-08-29 15:12:54.241471 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:12:54.241479 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:12:54.241488 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:12:54.241495 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:12:54.241502 | 
orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:12:54.241510 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:12:54.241516 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 15:12:54.241523 | orchestrator | 2025-08-29 15:12:54.241530 | orchestrator | 2025-08-29 15:12:54.241537 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:12:54.241544 | orchestrator | Friday 29 August 2025 15:12:51 +0000 (0:00:28.601) 0:04:20.313 ********* 2025-08-29 15:12:54.241551 | orchestrator | =============================================================================== 2025-08-29 15:12:54.241557 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.31s 2025-08-29 15:12:54.241563 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.03s 2025-08-29 15:12:54.241574 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.60s 2025-08-29 15:12:54.241580 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.81s 2025-08-29 15:12:54.241587 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.07s 2025-08-29 15:12:54.241594 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.21s 2025-08-29 15:12:54.241601 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.20s 2025-08-29 15:12:54.241608 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.15s 2025-08-29 15:12:54.241615 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.03s 2025-08-29 15:12:54.241622 | orchestrator | 
service-ks-register : neutron | Creating roles -------------------------- 3.93s 2025-08-29 15:12:54.241629 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.79s 2025-08-29 15:12:54.241636 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.71s 2025-08-29 15:12:54.241643 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.63s 2025-08-29 15:12:54.241651 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.61s 2025-08-29 15:12:54.241658 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.57s 2025-08-29 15:12:54.241665 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.55s 2025-08-29 15:12:54.241672 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.26s 2025-08-29 15:12:54.241679 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.25s 2025-08-29 15:12:54.241691 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.23s 2025-08-29 15:12:54.241699 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.21s 2025-08-29 15:12:54.241707 | orchestrator | 2025-08-29 15:12:54 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:12:54.241715 | orchestrator | 2025-08-29 15:12:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:57.282555 | orchestrator | 2025-08-29 15:12:57 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED 2025-08-29 15:12:57.283159 | orchestrator | 2025-08-29 15:12:57 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:12:57.284628 | orchestrator | 2025-08-29 15:12:57 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 
15:12:57.285873 | orchestrator | 2025-08-29 15:12:57 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:12:57.285918 | orchestrator | 2025-08-29 15:12:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:00.313616 | orchestrator | 2025-08-29 15:13:00 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED 2025-08-29 15:13:00.317436 | orchestrator | 2025-08-29 15:13:00 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:00.319715 | orchestrator | 2025-08-29 15:13:00 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:00.321523 | orchestrator | 2025-08-29 15:13:00 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:00.321588 | orchestrator | 2025-08-29 15:13:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:03.366363 | orchestrator | 2025-08-29 15:13:03 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED 2025-08-29 15:13:03.368330 | orchestrator | 2025-08-29 15:13:03 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:03.369831 | orchestrator | 2025-08-29 15:13:03 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:03.370945 | orchestrator | 2025-08-29 15:13:03 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:03.370984 | orchestrator | 2025-08-29 15:13:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:06.417492 | orchestrator | 2025-08-29 15:13:06 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state STARTED 2025-08-29 15:13:06.425273 | orchestrator | 2025-08-29 15:13:06 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:06.430505 | orchestrator | 2025-08-29 15:13:06 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:06.436118 | orchestrator 
| 2025-08-29 15:13:06 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:06.436234 | orchestrator | 2025-08-29 15:13:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:09.511764 | orchestrator | 2025-08-29 15:13:09 | INFO  | Task 5bf097ba-2b0d-490c-b454-a92654b81456 is in state SUCCESS 2025-08-29 15:13:09.513582 | orchestrator | 2025-08-29 15:13:09.513636 | orchestrator | 2025-08-29 15:13:09.513667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:13:09.513673 | orchestrator | 2025-08-29 15:13:09.513677 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:13:09.513682 | orchestrator | Friday 29 August 2025 15:11:52 +0000 (0:00:00.295) 0:00:00.295 ********* 2025-08-29 15:13:09.513686 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:13:09.513691 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:13:09.513695 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:13:09.513699 | orchestrator | 2025-08-29 15:13:09.513703 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:13:09.513707 | orchestrator | Friday 29 August 2025 15:11:52 +0000 (0:00:00.327) 0:00:00.623 ********* 2025-08-29 15:13:09.513711 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 15:13:09.513716 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 15:13:09.513742 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 15:13:09.513746 | orchestrator | 2025-08-29 15:13:09.513750 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 15:13:09.513755 | orchestrator | 2025-08-29 15:13:09.513762 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:13:09.513766 | orchestrator | 
Friday 29 August 2025 15:11:53 +0000 (0:00:00.458) 0:00:01.081 ********* 2025-08-29 15:13:09.513771 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:09.513776 | orchestrator | 2025-08-29 15:13:09.513779 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 15:13:09.513783 | orchestrator | Friday 29 August 2025 15:11:53 +0000 (0:00:00.529) 0:00:01.611 ********* 2025-08-29 15:13:09.513787 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 15:13:09.513791 | orchestrator | 2025-08-29 15:13:09.513795 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 15:13:09.513799 | orchestrator | Friday 29 August 2025 15:11:57 +0000 (0:00:03.588) 0:00:05.200 ********* 2025-08-29 15:13:09.513802 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 15:13:09.513807 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 15:13:09.513810 | orchestrator | 2025-08-29 15:13:09.513814 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 15:13:09.513818 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:07.529) 0:00:12.729 ********* 2025-08-29 15:13:09.513822 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:13:09.513826 | orchestrator | 2025-08-29 15:13:09.513830 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 15:13:09.513833 | orchestrator | Friday 29 August 2025 15:12:08 +0000 (0:00:03.414) 0:00:16.144 ********* 2025-08-29 15:13:09.513837 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:13:09.513841 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2025-08-29 15:13:09.513845 | orchestrator | 2025-08-29 15:13:09.513848 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 15:13:09.513852 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:04.223) 0:00:20.367 ********* 2025-08-29 15:13:09.513856 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:13:09.513860 | orchestrator | 2025-08-29 15:13:09.513864 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 15:13:09.513868 | orchestrator | Friday 29 August 2025 15:12:16 +0000 (0:00:03.570) 0:00:23.937 ********* 2025-08-29 15:13:09.513871 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 15:13:09.513875 | orchestrator | 2025-08-29 15:13:09.513879 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:13:09.513882 | orchestrator | Friday 29 August 2025 15:12:20 +0000 (0:00:04.549) 0:00:28.487 ********* 2025-08-29 15:13:09.513886 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.513891 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:09.513894 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:09.513898 | orchestrator | 2025-08-29 15:13:09.513902 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 15:13:09.513906 | orchestrator | Friday 29 August 2025 15:12:21 +0000 (0:00:00.662) 0:00:29.149 ********* 2025-08-29 15:13:09.513912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.513939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.513944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.513948 | orchestrator | 2025-08-29 15:13:09.513952 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 15:13:09.513956 | orchestrator | Friday 29 August 2025 15:12:23 +0000 (0:00:01.579) 0:00:30.729 ********* 2025-08-29 15:13:09.513960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.513963 | orchestrator | 2025-08-29 15:13:09.513967 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 15:13:09.513971 | orchestrator | Friday 29 August 2025 15:12:23 +0000 (0:00:00.446) 0:00:31.175 ********* 2025-08-29 15:13:09.513975 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.513978 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:09.513982 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:09.513986 | orchestrator | 2025-08-29 15:13:09.513990 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:13:09.513993 | orchestrator | Friday 29 August 2025 15:12:24 +0000 (0:00:00.696) 0:00:31.872 ********* 2025-08-29 15:13:09.513997 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:09.514001 | orchestrator | 2025-08-29 15:13:09.514005 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 15:13:09.514008 | orchestrator | Friday 29 August 2025 
15:12:24 +0000 (0:00:00.705) 0:00:32.577 ********* 2025-08-29 15:13:09.514046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514069 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514073 | orchestrator | 2025-08-29 15:13:09.514076 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 15:13:09.514080 | orchestrator | Friday 29 August 2025 15:12:27 +0000 (0:00:02.432) 0:00:35.010 ********* 2025-08-29 15:13:09.514084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.514092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514099 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:09.514109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514113 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:09.514117 | orchestrator | 2025-08-29 15:13:09.514121 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 15:13:09.514125 | orchestrator | Friday 29 August 2025 15:12:28 +0000 (0:00:01.002) 0:00:36.013 ********* 2025-08-29 15:13:09.514128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.514136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:09.514144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514151 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:09.514155 | orchestrator | 2025-08-29 15:13:09.514159 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 15:13:09.514163 | orchestrator | Friday 29 August 2025 15:12:29 +0000 (0:00:00.784) 0:00:36.797 ********* 2025-08-29 15:13:09.514174 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514214 | orchestrator | 2025-08-29 15:13:09.514220 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 15:13:09.514233 | orchestrator | Friday 29 August 2025 15:12:30 +0000 (0:00:01.615) 0:00:38.413 ********* 2025-08-29 15:13:09.514240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-08-29 15:13:09.514247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514268 | orchestrator | 2025-08-29 15:13:09.514272 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 15:13:09.514276 | orchestrator | Friday 29 August 2025 15:12:33 +0000 (0:00:02.320) 0:00:40.733 ********* 2025-08-29 15:13:09.514281 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:13:09.514285 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:13:09.514289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:13:09.514294 | orchestrator | 2025-08-29 15:13:09.514298 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 15:13:09.514302 | orchestrator | Friday 29 August 2025 15:12:34 +0000 (0:00:01.377) 0:00:42.111 ********* 2025-08-29 15:13:09.514307 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:09.514311 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:09.514315 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:09.514323 | orchestrator | 2025-08-29 15:13:09.514327 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 15:13:09.514332 | orchestrator | Friday 29 August 2025 15:12:36 +0000 (0:00:01.673) 0:00:43.784 ********* 2025-08-29 15:13:09.514336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514341 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:09.514345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514349 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:09.514359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:13:09.514363 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:09.514367 | orchestrator | 2025-08-29 15:13:09.514371 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 15:13:09.514374 | orchestrator | Friday 29 August 2025 15:12:36 +0000 (0:00:00.486) 0:00:44.271 ********* 2025-08-29 15:13:09.514379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:13:09.514406 | orchestrator | 2025-08-29 15:13:09.514412 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 15:13:09.514418 | orchestrator | Friday 29 August 2025 15:12:38 +0000 (0:00:01.971) 0:00:46.242 ********* 2025-08-29 15:13:09.514424 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:13:09.514430 | orchestrator | 2025-08-29 15:13:09.514436 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 15:13:09.514442 | orchestrator | Friday 29 August 2025 15:12:40 +0000 (0:00:02.223) 0:00:48.466 ********* 2025-08-29 15:13:09.514449 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:09.514455 | orchestrator | 2025-08-29 15:13:09.514462 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 15:13:09.514466 | orchestrator | Friday 29 August 2025 15:12:43 +0000 (0:00:02.266) 0:00:50.732 ********* 2025-08-29 15:13:09.514472 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:09.514476 | orchestrator | 2025-08-29 15:13:09.514483 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:13:09.514487 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:12.588) 0:01:03.321 ********* 2025-08-29 15:13:09.514490 | orchestrator | 2025-08-29 15:13:09.514494 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:13:09.514498 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:00.074) 0:01:03.396 ********* 2025-08-29 15:13:09.514502 | orchestrator | 2025-08-29 15:13:09.514505 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:13:09.514509 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:00.063) 0:01:03.459 ********* 2025-08-29 15:13:09.514513 | orchestrator | 2025-08-29 15:13:09.514516 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-08-29 15:13:09.514524 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:00.080) 0:01:03.540 ********* 2025-08-29 15:13:09.514528 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:09.514532 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:13:09.514535 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:09.514539 | orchestrator | 2025-08-29 15:13:09.514544 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:13:09.514551 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:13:09.514558 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:13:09.514564 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:13:09.514570 | orchestrator | 2025-08-29 15:13:09.514576 | orchestrator | 2025-08-29 15:13:09.514582 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:13:09.514588 | orchestrator | Friday 29 August 2025 15:13:08 +0000 (0:00:12.516) 0:01:16.056 ********* 2025-08-29 15:13:09.514594 | orchestrator | =============================================================================== 2025-08-29 15:13:09.514600 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.59s 2025-08-29 15:13:09.514607 | orchestrator | placement : Restart placement-api container ---------------------------- 12.52s 2025-08-29 15:13:09.514613 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.53s 2025-08-29 15:13:09.514617 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.55s 2025-08-29 15:13:09.514620 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.22s 2025-08-29 15:13:09.514624 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.59s 2025-08-29 15:13:09.514628 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.57s 2025-08-29 15:13:09.514631 | orchestrator | 
service-ks-register : placement | Creating projects --------------------- 3.41s 2025-08-29 15:13:09.514635 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.43s 2025-08-29 15:13:09.514639 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.32s 2025-08-29 15:13:09.514642 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.27s 2025-08-29 15:13:09.514646 | orchestrator | placement : Creating placement databases -------------------------------- 2.22s 2025-08-29 15:13:09.514650 | orchestrator | placement : Check placement containers ---------------------------------- 1.97s 2025-08-29 15:13:09.514654 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.67s 2025-08-29 15:13:09.514657 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s 2025-08-29 15:13:09.514661 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.58s 2025-08-29 15:13:09.514665 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.38s 2025-08-29 15:13:09.514669 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.00s 2025-08-29 15:13:09.514672 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.78s 2025-08-29 15:13:09.514677 | orchestrator | placement : include_tasks ----------------------------------------------- 0.71s 2025-08-29 15:13:09.514680 | orchestrator | 2025-08-29 15:13:09 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:09.514684 | orchestrator | 2025-08-29 15:13:09 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:09.514690 | orchestrator | 2025-08-29 15:13:09 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 
15:13:09.514698 | orchestrator | 2025-08-29 15:13:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:12.556287 | orchestrator | 2025-08-29 15:13:12 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:12.557267 | orchestrator | 2025-08-29 15:13:12 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:12.558976 | orchestrator | 2025-08-29 15:13:12 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:12.560003 | orchestrator | 2025-08-29 15:13:12 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:12.560072 | orchestrator | 2025-08-29 15:13:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:15.598608 | orchestrator | 2025-08-29 15:13:15 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:15.599599 | orchestrator | 2025-08-29 15:13:15 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:15.601066 | orchestrator | 2025-08-29 15:13:15 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:15.602628 | orchestrator | 2025-08-29 15:13:15 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:15.602801 | orchestrator | 2025-08-29 15:13:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:18.645955 | orchestrator | 2025-08-29 15:13:18 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:18.647888 | orchestrator | 2025-08-29 15:13:18 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:18.649210 | orchestrator | 2025-08-29 15:13:18 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:18.650811 | orchestrator | 2025-08-29 15:13:18 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:18.650957 | orchestrator 
| 2025-08-29 15:13:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:21.702721 | orchestrator | 2025-08-29 15:13:21 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:21.703148 | orchestrator | 2025-08-29 15:13:21 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:21.704575 | orchestrator | 2025-08-29 15:13:21 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:21.705516 | orchestrator | 2025-08-29 15:13:21 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:21.705549 | orchestrator | 2025-08-29 15:13:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:24.762793 | orchestrator | 2025-08-29 15:13:24 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:24.765759 | orchestrator | 2025-08-29 15:13:24 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state STARTED 2025-08-29 15:13:24.768218 | orchestrator | 2025-08-29 15:13:24 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:24.770120 | orchestrator | 2025-08-29 15:13:24 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:24.770221 | orchestrator | 2025-08-29 15:13:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:27.821695 | orchestrator | 2025-08-29 15:13:27 | INFO  | Task f1951321-829f-4e66-b80d-7db6591e7aef is in state STARTED 2025-08-29 15:13:27.823582 | orchestrator | 2025-08-29 15:13:27 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:27.826959 | orchestrator | 2025-08-29 15:13:27 | INFO  | Task 58a2ba35-64d9-4c6d-bd09-6a515e1f4fe0 is in state SUCCESS 2025-08-29 15:13:27.829910 | orchestrator | 2025-08-29 15:13:27.829950 | orchestrator | 2025-08-29 15:13:27.829980 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-08-29 15:13:27.829992 | orchestrator | 2025-08-29 15:13:27.830004 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:13:27.830050 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:00.312) 0:00:00.312 ********* 2025-08-29 15:13:27.830062 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:13:27.830074 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:13:27.830084 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:13:27.830095 | orchestrator | 2025-08-29 15:13:27.830104 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:13:27.830115 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:00.465) 0:00:00.778 ********* 2025-08-29 15:13:27.830147 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 15:13:27.830158 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-08-29 15:13:27.830203 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 15:13:27.830215 | orchestrator | 2025-08-29 15:13:27.830226 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 15:13:27.830236 | orchestrator | 2025-08-29 15:13:27.830246 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:27.830256 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:00.554) 0:00:01.333 ********* 2025-08-29 15:13:27.830267 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:27.830278 | orchestrator | 2025-08-29 15:13:27.830288 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 15:13:27.830299 | orchestrator | Friday 29 August 2025 15:10:13 +0000 (0:00:00.662) 0:00:01.995 ********* 2025-08-29 
15:13:27.830310 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 15:13:27.830320 | orchestrator | 2025-08-29 15:13:27.830341 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 15:13:27.830352 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:03.810) 0:00:05.805 ********* 2025-08-29 15:13:27.830362 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 15:13:27.830450 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 15:13:27.830479 | orchestrator | 2025-08-29 15:13:27.830489 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 15:13:27.830500 | orchestrator | Friday 29 August 2025 15:10:24 +0000 (0:00:07.142) 0:00:12.948 ********* 2025-08-29 15:13:27.830550 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:13:27.830561 | orchestrator | 2025-08-29 15:13:27.830595 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 15:13:27.830607 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:04.151) 0:00:17.100 ********* 2025-08-29 15:13:27.830617 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:13:27.830628 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 15:13:27.830639 | orchestrator | 2025-08-29 15:13:27.830649 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 15:13:27.830660 | orchestrator | Friday 29 August 2025 15:10:33 +0000 (0:00:04.548) 0:00:21.648 ********* 2025-08-29 15:13:27.830671 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:13:27.830681 | orchestrator | 2025-08-29 15:13:27.830691 | orchestrator | TASK [service-ks-register : 
designate | Granting user roles] ******************* 2025-08-29 15:13:27.830702 | orchestrator | Friday 29 August 2025 15:10:36 +0000 (0:00:03.803) 0:00:25.452 ********* 2025-08-29 15:13:27.830713 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 15:13:27.830724 | orchestrator | 2025-08-29 15:13:27.830750 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 15:13:27.830762 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:04.752) 0:00:30.204 ********* 2025-08-29 15:13:27.830774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.830806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.830819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.830848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-08-29 15:13:27.830868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-08-29 15:13:27.830956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.830995 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831069 | orchestrator | 2025-08-29 15:13:27.831080 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 15:13:27.831093 | orchestrator | Friday 29 August 2025 15:10:45 +0000 (0:00:04.100) 0:00:34.304 ********* 2025-08-29 15:13:27.831103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.831112 | orchestrator | 2025-08-29 15:13:27.831122 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 15:13:27.831133 | orchestrator | Friday 29 August 2025 15:10:45 +0000 (0:00:00.177) 0:00:34.482 ********* 2025-08-29 15:13:27.831143 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.831154 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.831164 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 15:13:27.831192 | orchestrator | 2025-08-29 15:13:27.831203 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:27.831213 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:00.332) 0:00:34.815 ********* 2025-08-29 15:13:27.831224 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:13:27.831235 | orchestrator | 2025-08-29 15:13:27.831245 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 15:13:27.831255 | orchestrator | Friday 29 August 2025 15:10:47 +0000 (0:00:00.858) 0:00:35.674 ********* 2025-08-29 15:13:27.831273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.831285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.831301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.831320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.831521 | orchestrator | 2025-08-29 15:13:27.831532 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 15:13:27.831542 | orchestrator | Friday 29 August 2025 15:10:54 +0000 (0:00:07.519) 0:00:43.193 ********* 2025-08-29 15:13:27.831586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.831603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.831614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.831636 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.831649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.831660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.831669 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.831679 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.832110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.832139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832252 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.832265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.832284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.832303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832347 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:27.832357 | orchestrator | 2025-08-29 15:13:27.832367 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 15:13:27.832376 | orchestrator | Friday 29 August 2025 15:10:56 +0000 (0:00:01.619) 0:00:44.813 ********* 2025-08-29 15:13:27.832388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.832404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.832423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.832480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.832498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.832525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.832580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.832605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.832617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.832665 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:27.832675 | orchestrator | 2025-08-29 15:13:27.832685 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 15:13:27.832695 | orchestrator | Friday 29 August 2025 15:10:58 +0000 (0:00:02.390) 0:00:47.203 ********* 
2025-08-29 15:13:27.832706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.832728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.832741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.832748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2025-08-29 15:13:27.832925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832972 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.832979 | orchestrator | 2025-08-29 15:13:27.832986 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 15:13:27.832992 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:06.834) 0:00:54.038 ********* 2025-08-29 15:13:27.832999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.833021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.833032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.833038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833161 | orchestrator | 2025-08-29 15:13:27.833185 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 15:13:27.833197 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:15.459) 0:01:09.498 ********* 2025-08-29 15:13:27.833207 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:27.833218 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:27.833228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:13:27.833239 | orchestrator | 2025-08-29 15:13:27.833246 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 15:13:27.833252 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:04.687) 0:01:14.185 ********* 2025-08-29 15:13:27.833258 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:27.833264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:27.833270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:13:27.833276 | orchestrator | 2025-08-29 15:13:27.833282 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-08-29 15:13:27.833288 | orchestrator | Friday 29 August 2025 15:11:28 +0000 (0:00:02.943) 0:01:17.129 ********* 2025-08-29 15:13:27.833300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833329 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833379 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833475 | orchestrator | 2025-08-29 15:13:27.833497 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 15:13:27.833512 | orchestrator | Friday 29 August 2025 15:11:32 
+0000 (0:00:03.595) 0:01:20.725 ********* 2025-08-29 15:13:27.833523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-08-29 15:13:27.833844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.833850 | orchestrator | 2025-08-29 15:13:27.833857 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:27.833863 | orchestrator | Friday 29 August 2025 15:11:35 +0000 (0:00:02.933) 0:01:23.659 ********* 2025-08-29 15:13:27.833869 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.833876 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.833882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:27.833888 | orchestrator | 2025-08-29 15:13:27.833894 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 15:13:27.833900 | orchestrator | Friday 29 August 2025 15:11:35 +0000 (0:00:00.838) 0:01:24.498 ********* 2025-08-29 15:13:27.833910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.833931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.833957 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.833967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.833973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.833987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 
15:13:27.833993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:13:27.834000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:13:27.834039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834080 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.834087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:13:27.834093 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:27.834099 | orchestrator | 2025-08-29 15:13:27.834106 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 15:13:27.834112 | orchestrator | Friday 29 August 2025 15:11:37 +0000 (0:00:01.547) 0:01:26.046 ********* 2025-08-29 15:13:27.834122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.834134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.834145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:13:27.834152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 
15:13:27.834159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834227 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:13:27.834289 | orchestrator | 2025-08-29 15:13:27.834296 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:13:27.834302 | orchestrator | Friday 29 August 2025 15:11:43 +0000 (0:00:06.016) 0:01:32.062 ********* 2025-08-29 15:13:27.834308 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:27.834314 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:27.834324 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:27.834330 | orchestrator | 2025-08-29 15:13:27.834337 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 15:13:27.834343 | orchestrator | Friday 29 August 2025 15:11:44 +0000 (0:00:00.518) 0:01:32.581 ********* 2025-08-29 15:13:27.834349 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 
15:13:27.834355 | orchestrator | 2025-08-29 15:13:27.834362 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 15:13:27.834368 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:02.518) 0:01:35.100 ********* 2025-08-29 15:13:27.834374 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:13:27.834380 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 15:13:27.834387 | orchestrator | 2025-08-29 15:13:27.834393 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 15:13:27.834402 | orchestrator | Friday 29 August 2025 15:11:49 +0000 (0:00:03.056) 0:01:38.156 ********* 2025-08-29 15:13:27.834408 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834415 | orchestrator | 2025-08-29 15:13:27.834422 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:27.834428 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:15.498) 0:01:53.654 ********* 2025-08-29 15:13:27.834435 | orchestrator | 2025-08-29 15:13:27.834442 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:27.834449 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.069) 0:01:53.724 ********* 2025-08-29 15:13:27.834455 | orchestrator | 2025-08-29 15:13:27.834462 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:13:27.834469 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.071) 0:01:53.795 ********* 2025-08-29 15:13:27.834475 | orchestrator | 2025-08-29 15:13:27.834482 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 15:13:27.834489 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.071) 0:01:53.867 ********* 2025-08-29 15:13:27.834496 | 
orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834503 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834510 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834517 | orchestrator | 2025-08-29 15:13:27.834524 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 15:13:27.834531 | orchestrator | Friday 29 August 2025 15:12:20 +0000 (0:00:14.966) 0:02:08.834 ********* 2025-08-29 15:13:27.834538 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834545 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834551 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834558 | orchestrator | 2025-08-29 15:13:27.834565 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 15:13:27.834572 | orchestrator | Friday 29 August 2025 15:12:33 +0000 (0:00:13.365) 0:02:22.199 ********* 2025-08-29 15:13:27.834578 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834588 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834595 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834602 | orchestrator | 2025-08-29 15:13:27.834609 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 15:13:27.834616 | orchestrator | Friday 29 August 2025 15:12:44 +0000 (0:00:10.818) 0:02:33.017 ********* 2025-08-29 15:13:27.834623 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834630 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834636 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834643 | orchestrator | 2025-08-29 15:13:27.834650 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 15:13:27.834657 | orchestrator | Friday 29 August 2025 15:12:54 +0000 (0:00:10.335) 0:02:43.353 ********* 2025-08-29 15:13:27.834664 | orchestrator | 
changed: [testbed-node-0] 2025-08-29 15:13:27.834671 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834681 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834688 | orchestrator | 2025-08-29 15:13:27.834695 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 15:13:27.834702 | orchestrator | Friday 29 August 2025 15:13:07 +0000 (0:00:12.235) 0:02:55.589 ********* 2025-08-29 15:13:27.834709 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834716 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:27.834722 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:27.834729 | orchestrator | 2025-08-29 15:13:27.834736 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 15:13:27.834742 | orchestrator | Friday 29 August 2025 15:13:18 +0000 (0:00:11.782) 0:03:07.371 ********* 2025-08-29 15:13:27.834749 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:27.834756 | orchestrator | 2025-08-29 15:13:27.834763 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:13:27.834770 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:13:27.834778 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:13:27.834784 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:13:27.834790 | orchestrator | 2025-08-29 15:13:27.834796 | orchestrator | 2025-08-29 15:13:27.834802 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:13:27.834809 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:07.360) 0:03:14.732 ********* 2025-08-29 15:13:27.834815 | orchestrator | 
=============================================================================== 2025-08-29 15:13:27.834821 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.50s 2025-08-29 15:13:27.834827 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.46s 2025-08-29 15:13:27.834833 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.97s 2025-08-29 15:13:27.834839 | orchestrator | designate : Restart designate-api container ---------------------------- 13.37s 2025-08-29 15:13:27.834845 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.24s 2025-08-29 15:13:27.834851 | orchestrator | designate : Restart designate-worker container ------------------------- 11.78s 2025-08-29 15:13:27.834857 | orchestrator | designate : Restart designate-central container ------------------------ 10.82s 2025-08-29 15:13:27.834863 | orchestrator | designate : Restart designate-producer container ----------------------- 10.34s 2025-08-29 15:13:27.834870 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.52s 2025-08-29 15:13:27.834876 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.36s 2025-08-29 15:13:27.834885 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.14s 2025-08-29 15:13:27.834891 | orchestrator | designate : Copying over config.json files for services ----------------- 6.84s 2025-08-29 15:13:27.834897 | orchestrator | designate : Check designate containers ---------------------------------- 6.02s 2025-08-29 15:13:27.834903 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.75s 2025-08-29 15:13:27.834910 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.69s 2025-08-29 15:13:27.834916 | orchestrator | 
service-ks-register : designate | Creating users ------------------------ 4.55s 2025-08-29 15:13:27.834922 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.15s 2025-08-29 15:13:27.834928 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.10s 2025-08-29 15:13:27.834934 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.81s 2025-08-29 15:13:27.834940 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.80s 2025-08-29 15:13:27.834952 | orchestrator | 2025-08-29 15:13:27 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:27.834958 | orchestrator | 2025-08-29 15:13:27 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:27.834964 | orchestrator | 2025-08-29 15:13:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:30.882425 | orchestrator | 2025-08-29 15:13:30 | INFO  | Task f1951321-829f-4e66-b80d-7db6591e7aef is in state STARTED 2025-08-29 15:13:30.882544 | orchestrator | 2025-08-29 15:13:30 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:30.885132 | orchestrator | 2025-08-29 15:13:30 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:30.885916 | orchestrator | 2025-08-29 15:13:30 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:30.886299 | orchestrator | 2025-08-29 15:13:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:33.927859 | orchestrator | 2025-08-29 15:13:33 | INFO  | Task f1951321-829f-4e66-b80d-7db6591e7aef is in state SUCCESS 2025-08-29 15:13:33.931832 | orchestrator | 2025-08-29 15:13:33 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:13:33.936405 | orchestrator | 2025-08-29 15:13:33 | INFO  | Task 
b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:13:33.939554 | orchestrator | 2025-08-29 15:13:33 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:13:33.941613 | orchestrator | 2025-08-29 15:13:33 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state STARTED 2025-08-29 15:13:33.941869 | orchestrator | 2025-08-29 15:13:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:59.320010 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:14:59.324934 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:14:59.332717 | orchestrator | 2025-08-29 15:14:59 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED 2025-08-29 15:14:59.339443 | orchestrator | 2025-08-29 15:14:59.339560 | orchestrator | 2025-08-29 15:14:59.339574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:14:59.339585 | orchestrator | 2025-08-29 15:14:59.339594 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:14:59.339659 | orchestrator | Friday 29 August 2025 15:13:30 +0000 (0:00:00.176) 0:00:00.176 ********* 2025-08-29 15:14:59.339669 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:59.339679 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:59.339687 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:59.339696 | orchestrator | 2025-08-29 15:14:59.339705 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:14:59.339714 | orchestrator | Friday 29 August 2025 15:13:30 +0000 (0:00:00.360) 0:00:00.536 ********* 2025-08-29 15:14:59.339723 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:14:59.339732 | orchestrator | ok: [testbed-node-1] => 
(item=enable_nova_True) 2025-08-29 15:14:59.339741 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 15:14:59.339749 | orchestrator | 2025-08-29 15:14:59.339758 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 15:14:59.339767 | orchestrator | 2025-08-29 15:14:59.339775 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 15:14:59.339784 | orchestrator | Friday 29 August 2025 15:13:31 +0000 (0:00:00.684) 0:00:01.221 ********* 2025-08-29 15:14:59.339793 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:59.339801 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:59.339810 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:59.339818 | orchestrator | 2025-08-29 15:14:59.339827 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:14:59.339836 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:14:59.339847 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:14:59.339855 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:14:59.339864 | orchestrator | 2025-08-29 15:14:59.339873 | orchestrator | 2025-08-29 15:14:59.339881 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:14:59.339890 | orchestrator | Friday 29 August 2025 15:13:32 +0000 (0:00:00.694) 0:00:01.916 ********* 2025-08-29 15:14:59.339922 | orchestrator | =============================================================================== 2025-08-29 15:14:59.339931 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.69s 2025-08-29 15:14:59.339940 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.68s 2025-08-29 15:14:59.339949 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-08-29 15:14:59.339958 | orchestrator | 2025-08-29 15:14:59.339966 | orchestrator | 2025-08-29 15:14:59.339975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:14:59.339983 | orchestrator | 2025-08-29 15:14:59.339992 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:14:59.340002 | orchestrator | Friday 29 August 2025 15:12:55 +0000 (0:00:00.368) 0:00:00.368 ********* 2025-08-29 15:14:59.340011 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:59.340021 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:59.340030 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:59.340040 | orchestrator | 2025-08-29 15:14:59.340049 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:14:59.340059 | orchestrator | Friday 29 August 2025 15:12:56 +0000 (0:00:00.390) 0:00:00.758 ********* 2025-08-29 15:14:59.340068 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 15:14:59.340095 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 15:14:59.340105 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 15:14:59.340114 | orchestrator | 2025-08-29 15:14:59.340124 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 15:14:59.340134 | orchestrator | 2025-08-29 15:14:59.340143 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:14:59.340151 | orchestrator | Friday 29 August 2025 15:12:57 +0000 (0:00:00.810) 0:00:01.568 ********* 2025-08-29 15:14:59.340160 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 15:14:59.340169 | orchestrator | 2025-08-29 15:14:59.340177 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 15:14:59.340186 | orchestrator | Friday 29 August 2025 15:12:58 +0000 (0:00:01.400) 0:00:02.969 ********* 2025-08-29 15:14:59.340195 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 15:14:59.340204 | orchestrator | 2025-08-29 15:14:59.340212 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 15:14:59.340221 | orchestrator | Friday 29 August 2025 15:13:03 +0000 (0:00:04.785) 0:00:07.755 ********* 2025-08-29 15:14:59.340229 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 15:14:59.340252 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 15:14:59.340261 | orchestrator | 2025-08-29 15:14:59.340270 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 15:14:59.340278 | orchestrator | Friday 29 August 2025 15:13:11 +0000 (0:00:07.806) 0:00:15.562 ********* 2025-08-29 15:14:59.340287 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:14:59.340296 | orchestrator | 2025-08-29 15:14:59.340304 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-08-29 15:14:59.340313 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:03.515) 0:00:19.078 ********* 2025-08-29 15:14:59.340337 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:14:59.340346 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 15:14:59.340355 | orchestrator | 2025-08-29 15:14:59.340363 | orchestrator | TASK [service-ks-register : magnum | Creating roles] 
*************************** 2025-08-29 15:14:59.340372 | orchestrator | Friday 29 August 2025 15:13:18 +0000 (0:00:04.234) 0:00:23.312 ********* 2025-08-29 15:14:59.340380 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:14:59.340389 | orchestrator | 2025-08-29 15:14:59.340398 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 15:14:59.340413 | orchestrator | Friday 29 August 2025 15:13:22 +0000 (0:00:03.893) 0:00:27.206 ********* 2025-08-29 15:14:59.340422 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 15:14:59.340430 | orchestrator | 2025-08-29 15:14:59.340439 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 15:14:59.340447 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:03.585) 0:00:30.792 ********* 2025-08-29 15:14:59.340456 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:59.340464 | orchestrator | 2025-08-29 15:14:59.340473 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 15:14:59.340481 | orchestrator | Friday 29 August 2025 15:13:29 +0000 (0:00:02.870) 0:00:33.663 ********* 2025-08-29 15:14:59.340490 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:59.340499 | orchestrator | 2025-08-29 15:14:59.340507 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-08-29 15:14:59.340515 | orchestrator | Friday 29 August 2025 15:13:32 +0000 (0:00:03.548) 0:00:37.211 ********* 2025-08-29 15:14:59.340524 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:59.340533 | orchestrator | 2025-08-29 15:14:59.340541 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 15:14:59.340550 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:03.453) 0:00:40.665 ********* 2025-08-29 15:14:59.340562 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.340622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.340661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.340671 | orchestrator | 2025-08-29 15:14:59.340680 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 15:14:59.340689 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:01.513) 0:00:42.180 ********* 2025-08-29 15:14:59.340698 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:59.340706 | orchestrator | 2025-08-29 15:14:59.340715 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 15:14:59.340724 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:00.130) 0:00:42.310 ********* 2025-08-29 15:14:59.340732 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:59.340741 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:14:59.340750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:59.340758 | orchestrator | 2025-08-29 15:14:59.340767 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 15:14:59.340775 | orchestrator | Friday 29 August 2025 15:13:38 +0000 (0:00:00.838) 0:00:43.149 ********* 2025-08-29 15:14:59.340784 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:14:59.340793 | orchestrator | 2025-08-29 15:14:59.340801 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 15:14:59.340810 | orchestrator | Friday 29 August 2025 15:13:39 +0000 (0:00:01.136) 0:00:44.285 ********* 2025-08-29 15:14:59.340823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.340865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.340874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.340892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2025-08-29 15:14:59.340902 | orchestrator | 2025-08-29 15:14:59.340911 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 15:14:59.340923 | orchestrator | Friday 29 August 2025 15:13:42 +0000 (0:00:02.946) 0:00:47.231 ********* 2025-08-29 15:14:59.340932 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:59.340941 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:59.340950 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:59.340958 | orchestrator | 2025-08-29 15:14:59.340967 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:14:59.340975 | orchestrator | Friday 29 August 2025 15:13:43 +0000 (0:00:00.414) 0:00:47.646 ********* 2025-08-29 15:14:59.340984 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:59.340993 | orchestrator | 2025-08-29 15:14:59.341001 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 15:14:59.341010 | orchestrator | Friday 29 August 2025 15:13:43 +0000 (0:00:00.697) 0:00:48.343 ********* 2025-08-29 15:14:59.341019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.341132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.341142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:59.341151 | orchestrator | 2025-08-29 15:14:59.341160 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 15:14:59.341169 | orchestrator | Friday 29 August 2025 15:13:46 +0000 (0:00:02.379) 0:00:50.723 ********* 2025-08-29 15:14:59.341179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341203 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:59.341223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341251 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:59.341260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:59.341284 | orchestrator | 2025-08-29 15:14:59.341293 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 15:14:59.341301 | orchestrator | Friday 29 August 2025 15:13:46 +0000 (0:00:00.680) 0:00:51.403 ********* 2025-08-29 15:14:59.341320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341347 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:59.341356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341380 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:59.341389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:14:59.341402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:14:59.341411 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:59.341420 | orchestrator | 2025-08-29 
15:14:59.341429 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 15:14:59.341438 | orchestrator | Friday 29 August 2025 15:13:48 +0000 (0:00:01.255) 0:00:52.659 ********* 2025-08-29 15:14:59 | INFO  | Task 23b5e544-5bc4-4ec1-94b7-03756c6e6779 is in state SUCCESS 2025-08-29 15:14:59.341452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy':
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:14:59.341749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})
2025-08-29 15:14:59.341769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341788 | orchestrator |
2025-08-29 15:14:59.341797 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-08-29 15:14:59.341805 | orchestrator | Friday 29 August 2025 15:13:50 +0000 (0:00:02.441) 0:00:55.101 *********
2025-08-29 15:14:59.341814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341891 | orchestrator |
2025-08-29 15:14:59.341900 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-08-29 15:14:59.341908 | orchestrator | Friday 29 August 2025 15:13:55 +0000 (0:00:05.014) 0:01:00.115 *********
2025-08-29 15:14:59.341917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341940 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:14:59.341956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.341979 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:14:59.341988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.341997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.342006 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:14:59.342061 | orchestrator |
2025-08-29 15:14:59.342073 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-08-29 15:14:59.342133 | orchestrator | Friday 29 August 2025 15:13:56 +0000 (0:00:00.712) 0:01:00.828 *********
2025-08-29 15:14:59.342177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.342189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.342205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:14:59.342214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.342223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.342242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:14:59.342251 | orchestrator |
2025-08-29 15:14:59.342260 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-08-29 15:14:59.342269 | orchestrator | Friday 29 August 2025 15:13:58 +0000 (0:00:02.172) 0:01:03.000 *********
2025-08-29 15:14:59.342277 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:14:59.342286 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:14:59.342295 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:14:59.342310 | orchestrator |
2025-08-29 15:14:59.342320 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-08-29 15:14:59.342330 | orchestrator | Friday 29 August 2025 15:13:58 +0000 (0:00:00.290) 0:01:03.291 *********
2025-08-29 15:14:59.342340 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:59.342349 | orchestrator |
2025-08-29 15:14:59.342358 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-08-29 15:14:59.342368 | orchestrator | Friday 29 August 2025 15:14:01 +0000 (0:00:02.406) 0:01:05.698 *********
2025-08-29 15:14:59.342377 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:59.342387 | orchestrator |
2025-08-29 15:14:59.342396 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-08-29 15:14:59.342406 | orchestrator | Friday 29 August 2025 15:14:03 +0000 (0:00:02.291) 0:01:07.990 *********
2025-08-29 15:14:59.342415 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:59.342424 | orchestrator |
2025-08-29 15:14:59.342434 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 15:14:59.342443 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:15.817) 0:01:23.807 *********
2025-08-29 15:14:59.342452 | orchestrator |
2025-08-29 15:14:59.342462 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 15:14:59.342471 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.065) 0:01:23.872 *********
2025-08-29 15:14:59.342480 | orchestrator |
2025-08-29 15:14:59.342490 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-08-29 15:14:59.342499 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.063) 0:01:23.936 *********
2025-08-29 15:14:59.342508 | orchestrator |
2025-08-29 15:14:59.342518 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-08-29 15:14:59.342527 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:00.064) 0:01:24.000 *********
2025-08-29 15:14:59.342536 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:59.342546 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:14:59.342556 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:14:59.342565 | orchestrator |
2025-08-29 15:14:59.342574 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-08-29 15:14:59.342584 | orchestrator | Friday 29 August 2025 15:14:42 +0000 (0:00:23.379) 0:01:47.380 *********
2025-08-29 15:14:59.342593 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:59.342603 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:14:59.342612 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:14:59.342622 | orchestrator |
2025-08-29 15:14:59.342631 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:14:59.342641 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 15:14:59.342652 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:14:59.342662 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:14:59.342672 | orchestrator |
2025-08-29 15:14:59.342681 | orchestrator |
2025-08-29 15:14:59.342691 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:14:59.342700 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:15.216) 0:02:02.596 *********
2025-08-29 15:14:59.342708 | orchestrator | ===============================================================================
2025-08-29 15:14:59.342717 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.38s
2025-08-29 15:14:59.342726 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.82s
2025-08-29 15:14:59.342734 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.22s
2025-08-29 15:14:59.342742 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.81s
2025-08-29 15:14:59.342757 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.01s
2025-08-29 15:14:59.342766 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.79s
2025-08-29 15:14:59.342774 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.23s
2025-08-29 15:14:59.342782 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.89s
2025-08-29 15:14:59.342799 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.59s
2025-08-29 15:14:59.342808 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.55s
2025-08-29 15:14:59.342817 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.52s
2025-08-29 15:14:59.342825 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.45s
2025-08-29 15:14:59.342834 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.95s
2025-08-29 15:14:59.342842 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.87s
2025-08-29 15:14:59.342851 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.44s
2025-08-29 15:14:59.342859 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.41s
2025-08-29 15:14:59.342872 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s
2025-08-29 15:14:59.342881 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s
2025-08-29 15:14:59.342889 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.17s
2025-08-29 15:14:59.342897 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.52s
2025-08-29 15:14:59.342906 | orchestrator | 2025-08-29 15:14:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:02.386007 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:02.387557 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED
2025-08-29 15:15:02.389220 | orchestrator | 2025-08-29 15:15:02 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:15:02.389270 | orchestrator | 2025-08-29 15:15:02 | INFO  |
Wait 1 second(s) until the next check
2025-08-29 15:15:05.433153 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:05.434970 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED
2025-08-29 15:15:05.437793 | orchestrator | 2025-08-29 15:15:05 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:15:05.437909 | orchestrator | 2025-08-29 15:15:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:08.498263 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:08.501860 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED
2025-08-29 15:15:08.503834 | orchestrator | 2025-08-29 15:15:08 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state STARTED
2025-08-29 15:15:08.504369 | orchestrator | 2025-08-29 15:15:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:11.556886 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:11.559565 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED
2025-08-29 15:15:11.564667 | orchestrator | 2025-08-29 15:15:11 | INFO  | Task 4bd5c186-64bc-42ff-9604-39460f6bd590 is in state SUCCESS
2025-08-29 15:15:11.567160 | orchestrator |
2025-08-29 15:15:11.567233 | orchestrator |
2025-08-29 15:15:11.567293 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:15:11.567301 | orchestrator |
2025-08-29 15:15:11.567307 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-08-29 15:15:11.567316 | orchestrator | Friday 29 August 2025 15:06:10 +0000 (0:00:00.294) 0:00:00.294 *********
2025-08-29 15:15:11.567323 | orchestrator | changed: [testbed-manager]
2025-08-29 15:15:11.567330 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567334 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.567339 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.567344 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.567348 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.567353 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.567358 | orchestrator |
2025-08-29 15:15:11.567362 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:15:11.567367 | orchestrator | Friday 29 August 2025 15:06:11 +0000 (0:00:00.918) 0:00:01.213 *********
2025-08-29 15:15:11.567372 | orchestrator | changed: [testbed-manager]
2025-08-29 15:15:11.567376 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567381 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.567386 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.567390 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.567395 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.567399 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.567404 | orchestrator |
2025-08-29 15:15:11.567408 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:15:11.567413 | orchestrator | Friday 29 August 2025 15:06:12 +0000 (0:00:01.009) 0:00:02.223 *********
2025-08-29 15:15:11.567418 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-08-29 15:15:11.567422 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-08-29 15:15:11.567427 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-08-29 15:15:11.567443 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-08-29 15:15:11.567447 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-08-29 15:15:11.567452 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-08-29 15:15:11.567456 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-08-29 15:15:11.567461 | orchestrator |
2025-08-29 15:15:11.567466 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-08-29 15:15:11.567470 | orchestrator |
2025-08-29 15:15:11.567475 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 15:15:11.567479 | orchestrator | Friday 29 August 2025 15:06:13 +0000 (0:00:01.244) 0:00:03.468 *********
2025-08-29 15:15:11.567484 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:11.567488 | orchestrator |
2025-08-29 15:15:11.567493 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-08-29 15:15:11.567498 | orchestrator | Friday 29 August 2025 15:06:14 +0000 (0:00:01.190) 0:00:04.658 *********
2025-08-29 15:15:11.567503 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-08-29 15:15:11.567509 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-08-29 15:15:11.567514 | orchestrator |
2025-08-29 15:15:11.567519 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-08-29 15:15:11.567523 | orchestrator | Friday 29 August 2025 15:06:19 +0000 (0:00:04.546) 0:00:09.205 *********
2025-08-29 15:15:11.567528 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:15:11.567532 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 15:15:11.567537 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567541 | orchestrator |
2025-08-29 15:15:11.567546 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-08-29 15:15:11.567551 | orchestrator | Friday 29 August 2025 15:06:23 +0000 (0:00:04.494) 0:00:13.699 *********
2025-08-29 15:15:11.567561 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567566 | orchestrator |
2025-08-29 15:15:11.567570 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-08-29 15:15:11.567575 | orchestrator | Friday 29 August 2025 15:06:24 +0000 (0:00:00.702) 0:00:14.401 *********
2025-08-29 15:15:11.567579 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567584 | orchestrator |
2025-08-29 15:15:11.567588 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-08-29 15:15:11.567593 | orchestrator | Friday 29 August 2025 15:06:26 +0000 (0:00:01.617) 0:00:16.019 *********
2025-08-29 15:15:11.567597 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567602 | orchestrator |
2025-08-29 15:15:11.567606 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 15:15:11.567611 | orchestrator | Friday 29 August 2025 15:06:28 +0000 (0:00:02.569) 0:00:18.589 *********
2025-08-29 15:15:11.567615 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.567620 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.567624 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.567629 | orchestrator |
2025-08-29 15:15:11.567634 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 15:15:11.567638 | orchestrator | Friday 29 August 2025 15:06:29 +0000 (0:00:00.366) 0:00:18.956 *********
2025-08-29 15:15:11.567643 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:11.567647 | orchestrator |
2025-08-29 15:15:11.567652 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-08-29 15:15:11.567656 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:29.004) 0:00:47.960 *********
2025-08-29 15:15:11.567661 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.567666 | orchestrator |
2025-08-29 15:15:11.567670 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 15:15:11.567675 | orchestrator | Friday 29 August 2025 15:07:13 +0000 (0:00:15.715) 0:01:03.675 *********
2025-08-29 15:15:11.567679 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:11.567684 | orchestrator |
2025-08-29 15:15:11.567688 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 15:15:11.567693 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:14.771) 0:01:18.447 *********
2025-08-29 15:15:11.567710 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:11.567718 | orchestrator |
2025-08-29 15:15:11.567725 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-08-29 15:15:11.567732 | orchestrator | Friday 29 August 2025 15:07:29 +0000 (0:00:01.135) 0:01:19.582 *********
2025-08-29 15:15:11.567743 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.567752 | orchestrator |
2025-08-29 15:15:11.567763 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 15:15:11.567770 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.459) 0:01:20.042 *********
2025-08-29 15:15:11.567777 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:11.567784 | orchestrator |
2025-08-29 15:15:11.567791 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 15:15:11.567798 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.581) 0:01:20.623 *********
2025-08-29 15:15:11.567804 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:11.567811 | orchestrator |
2025-08-29 15:15:11.567818 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-08-29 15:15:11.567824 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:18.500) 0:01:39.123 *********
2025-08-29 15:15:11.567831 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.567837 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.567844 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.567851 | orchestrator |
2025-08-29 15:15:11.567859 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-08-29 15:15:11.567908 | orchestrator |
2025-08-29 15:15:11.567917 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 15:15:11.567956 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:00.275) 0:01:39.399 *********
2025-08-29 15:15:11.567964 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:11.568017 | orchestrator |
2025-08-29 15:15:11.568026 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-08-29 15:15:11.568031 | orchestrator | Friday 29 August 2025 15:07:50 +0000 (0:00:00.632) 0:01:40.032 *********
2025-08-29 15:15:11.568036 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568040 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568045 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.568049 | orchestrator |
2025-08-29 15:15:11.568054 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-08-29 15:15:11.568059 | orchestrator | Friday 29 August 2025 15:07:52 +0000 (0:00:02.247) 0:01:42.279 *********
2025-08-29 15:15:11.568081 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568089 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568093 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.568098 | orchestrator |
2025-08-29 15:15:11.568102 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 15:15:11.568107 | orchestrator | Friday 29 August 2025 15:07:54 +0000 (0:00:02.477) 0:01:44.757 *********
2025-08-29 15:15:11.568112 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.568116 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568121 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568125 | orchestrator |
2025-08-29 15:15:11.568130 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 15:15:11.568134 | orchestrator | Friday 29 August 2025 15:07:55 +0000 (0:00:00.491) 0:01:45.249 *********
2025-08-29 15:15:11.568139 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 15:15:11.568143 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568148 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 15:15:11.568152 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568157 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 15:15:11.568162 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-08-29 15:15:11.568166 | orchestrator |
2025-08-29 15:15:11.568171 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 15:15:11.568175 | orchestrator | Friday 29 August 2025 15:08:04 +0000 (0:00:09.461) 0:01:54.710 *********
2025-08-29 15:15:11.568180 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.568184 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568189 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568193 | orchestrator |
2025-08-29 15:15:11.568198 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 15:15:11.568202 | orchestrator | Friday 29 August 2025 15:08:05 +0000 (0:00:00.368) 0:01:55.079 *********
2025-08-29 15:15:11.568207 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 15:15:11.568212 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 15:15:11.568216 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.568220 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568225 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 15:15:11.568229 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568234 | orchestrator |
2025-08-29 15:15:11.568238 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-08-29 15:15:11.568243 | orchestrator | Friday 29 August 2025 15:08:05 +0000 (0:00:00.800) 0:01:55.880 *********
2025-08-29 15:15:11.568247 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568252 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.568256 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568261 | orchestrator |
2025-08-29 15:15:11.568265 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-08-29 15:15:11.568275 | orchestrator | Friday 29 August 2025 15:08:06 +0000 (0:00:00.601) 0:01:56.482 *********
2025-08-29 15:15:11.568279 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568284 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568288 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.568293 | orchestrator |
2025-08-29 15:15:11.568297 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-08-29 15:15:11.568302 | orchestrator | Friday 29 August 2025 15:08:07 +0000 (0:00:01.298) 0:01:57.781 *********
2025-08-29 15:15:11.568306 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.568311 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.568323 |
orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:11.568328 | orchestrator | 2025-08-29 15:15:11.568332 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 15:15:11.568337 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:02.265) 0:02:00.046 ********* 2025-08-29 15:15:11.568341 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568346 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568350 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:11.568355 | orchestrator | 2025-08-29 15:15:11.568360 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:15:11.568364 | orchestrator | Friday 29 August 2025 15:08:30 +0000 (0:00:19.998) 0:02:20.044 ********* 2025-08-29 15:15:11.568369 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568378 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:11.568382 | orchestrator | 2025-08-29 15:15:11.568387 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:15:11.568391 | orchestrator | Friday 29 August 2025 15:08:41 +0000 (0:00:11.730) 0:02:31.775 ********* 2025-08-29 15:15:11.568396 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:11.568401 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568405 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568409 | orchestrator | 2025-08-29 15:15:11.568414 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 15:15:11.568418 | orchestrator | Friday 29 August 2025 15:08:43 +0000 (0:00:01.292) 0:02:33.067 ********* 2025-08-29 15:15:11.568423 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568428 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568432 | orchestrator | 
changed: [testbed-node-0] 2025-08-29 15:15:11.568436 | orchestrator | 2025-08-29 15:15:11.568441 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 15:15:11.568446 | orchestrator | Friday 29 August 2025 15:08:54 +0000 (0:00:11.310) 0:02:44.377 ********* 2025-08-29 15:15:11.568450 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.568461 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568469 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568476 | orchestrator | 2025-08-29 15:15:11.568483 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:15:11.568491 | orchestrator | Friday 29 August 2025 15:08:56 +0000 (0:00:01.599) 0:02:45.977 ********* 2025-08-29 15:15:11.568497 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.568504 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568510 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568518 | orchestrator | 2025-08-29 15:15:11.568526 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 15:15:11.568534 | orchestrator | 2025-08-29 15:15:11.568542 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:15:11.568549 | orchestrator | Friday 29 August 2025 15:08:56 +0000 (0:00:00.327) 0:02:46.305 ********* 2025-08-29 15:15:11.568556 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:11.568564 | orchestrator | 2025-08-29 15:15:11.568572 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 15:15:11.568584 | orchestrator | Friday 29 August 2025 15:08:56 +0000 (0:00:00.582) 0:02:46.887 ********* 2025-08-29 15:15:11.568591 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy 
(compute_legacy))  2025-08-29 15:15:11.568598 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 15:15:11.568606 | orchestrator | 2025-08-29 15:15:11.568613 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 15:15:11.568621 | orchestrator | Friday 29 August 2025 15:09:00 +0000 (0:00:03.492) 0:02:50.380 ********* 2025-08-29 15:15:11.568628 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 15:15:11.568637 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 15:15:11.568642 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 15:15:11.568646 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 15:15:11.568651 | orchestrator | 2025-08-29 15:15:11.568656 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 15:15:11.568660 | orchestrator | Friday 29 August 2025 15:09:07 +0000 (0:00:06.791) 0:02:57.171 ********* 2025-08-29 15:15:11.568665 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:15:11.568669 | orchestrator | 2025-08-29 15:15:11.568674 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 15:15:11.568678 | orchestrator | Friday 29 August 2025 15:09:11 +0000 (0:00:04.120) 0:03:01.291 ********* 2025-08-29 15:15:11.568683 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:15:11.568687 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 15:15:11.568692 | orchestrator | 2025-08-29 15:15:11.568696 | orchestrator | TASK [service-ks-register : nova | Creating roles] 
***************************** 2025-08-29 15:15:11.568700 | orchestrator | Friday 29 August 2025 15:09:15 +0000 (0:00:03.808) 0:03:05.100 ********* 2025-08-29 15:15:11.568705 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:15:11.568709 | orchestrator | 2025-08-29 15:15:11.568714 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 15:15:11.568718 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:03.212) 0:03:08.313 ********* 2025-08-29 15:15:11.568723 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 15:15:11.568727 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 15:15:11.568732 | orchestrator | 2025-08-29 15:15:11.568736 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:15:11.568746 | orchestrator | Friday 29 August 2025 15:09:26 +0000 (0:00:08.064) 0:03:16.377 ********* 2025-08-29 15:15:11.568756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.568783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.568791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.568803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.568809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.568818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.568827 | orchestrator | 2025-08-29 15:15:11.568832 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 15:15:11.568837 | orchestrator | Friday 29 August 2025 15:09:28 +0000 (0:00:01.609) 0:03:17.987 ********* 2025-08-29 15:15:11.568841 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.568846 | orchestrator | 2025-08-29 15:15:11.568850 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 15:15:11.568855 | orchestrator | Friday 29 August 2025 15:09:28 +0000 (0:00:00.155) 0:03:18.143 ********* 2025-08-29 15:15:11.568859 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.568864 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568873 | orchestrator | 2025-08-29 15:15:11.568877 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 15:15:11.568882 | orchestrator | Friday 29 August 2025 15:09:29 +0000 (0:00:00.920) 0:03:19.063 ********* 2025-08-29 15:15:11.568886 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:15:11.568891 
| orchestrator | 2025-08-29 15:15:11.568895 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 15:15:11.568900 | orchestrator | Friday 29 August 2025 15:09:29 +0000 (0:00:00.840) 0:03:19.904 ********* 2025-08-29 15:15:11.568904 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.568909 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.568913 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.568918 | orchestrator | 2025-08-29 15:15:11.568922 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:15:11.568927 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:00.387) 0:03:20.292 ********* 2025-08-29 15:15:11.568935 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:11.568942 | orchestrator | 2025-08-29 15:15:11.568949 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:15:11.568956 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:00.576) 0:03:20.868 ********* 2025-08-29 15:15:11.568969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.568977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.568993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569041 | orchestrator | 2025-08-29 15:15:11.569049 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:15:11.569057 | orchestrator | Friday 29 August 2025 15:09:34 +0000 (0:00:03.356) 0:03:24.225 ********* 2025-08-29 15:15:11.569097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569109 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.569114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569133 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.569141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569157 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.569165 | orchestrator | 2025-08-29 15:15:11.569172 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:15:11.569178 | orchestrator | Friday 29 August 2025 15:09:35 +0000 (0:00:01.323) 0:03:25.549 ********* 2025-08-29 15:15:11.569186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569208 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.569627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.569682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569707 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.569714 | orchestrator | 2025-08-29 15:15:11.569722 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 15:15:11.569731 | orchestrator | Friday 29 August 2025 15:09:36 +0000 (0:00:01.211) 0:03:26.760 ********* 2025-08-29 15:15:11.569747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569811 | orchestrator | 2025-08-29 15:15:11.569818 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 15:15:11.569829 | orchestrator | Friday 29 August 2025 15:09:40 +0000 (0:00:03.790) 0:03:30.551 ********* 2025-08-29 15:15:11.569841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.569884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.569908 | orchestrator | 2025-08-29 15:15:11.569915 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 15:15:11.569929 | orchestrator | Friday 29 August 2025 15:09:49 +0000 (0:00:08.451) 0:03:39.003 ********* 2025-08-29 15:15:11.569942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569958 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.569970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.569978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.569987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:15:11.570005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.570014 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.570118 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.570127 | orchestrator | 2025-08-29 15:15:11.570134 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-08-29 15:15:11.570142 | orchestrator | Friday 29 August 2025 15:09:50 +0000 (0:00:01.041) 0:03:40.045 ********* 2025-08-29 15:15:11.570149 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:11.570157 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:15:11.570165 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:15:11.570172 | orchestrator | 2025-08-29 15:15:11.570180 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 15:15:11.570187 | orchestrator | Friday 29 August 2025 15:09:52 +0000 (0:00:01.897) 0:03:41.942 ********* 2025-08-29 15:15:11.570195 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.570202 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.570210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.570218 | orchestrator | 2025-08-29 15:15:11.570226 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 15:15:11.570233 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:01.105) 0:03:43.047 ********* 2025-08-29 15:15:11.570368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.570388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.570408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:15:11.570420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.570429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.570442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.570450 | orchestrator | 2025-08-29 15:15:11.570458 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:15:11.570465 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:00:02.568) 0:03:45.616 ********* 2025-08-29 15:15:11.570473 | orchestrator | 2025-08-29 15:15:11.570481 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:15:11.570488 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.343) 0:03:45.960 ********* 2025-08-29 15:15:11.570496 | orchestrator | 2025-08-29 15:15:11.570506 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 15:15:11.570513 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.425) 0:03:46.385 ********* 2025-08-29 15:15:11.570521 | orchestrator | 2025-08-29 15:15:11.570529 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 15:15:11.570537 | orchestrator | Friday 29 August 
2025 15:09:56 +0000 (0:00:00.413) 0:03:46.798 *********
2025-08-29 15:15:11.570545 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.570553 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.570561 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.570569 | orchestrator |
2025-08-29 15:15:11.570577 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-08-29 15:15:11.570584 | orchestrator | Friday 29 August 2025 15:10:17 +0000 (0:00:20.876) 0:04:07.675 *********
2025-08-29 15:15:11.570591 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.570598 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.570605 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.570612 | orchestrator |
2025-08-29 15:15:11.570620 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-08-29 15:15:11.570627 | orchestrator |
2025-08-29 15:15:11.570634 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:15:11.570641 | orchestrator | Friday 29 August 2025 15:10:23 +0000 (0:00:06.130) 0:04:13.806 *********
2025-08-29 15:15:11.570649 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:11.570657 | orchestrator |
2025-08-29 15:15:11.570669 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:15:11.570677 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:01.903) 0:04:15.710 *********
2025-08-29 15:15:11.570683 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.570690 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.570698 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.570705 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.570713 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.570720 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.570728 | orchestrator |
2025-08-29 15:15:11.570735 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-08-29 15:15:11.570739 | orchestrator | Friday 29 August 2025 15:10:27 +0000 (0:00:01.314) 0:04:17.024 *********
2025-08-29 15:15:11.570744 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.570748 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.570753 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.570757 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:15:11.570762 | orchestrator |
2025-08-29 15:15:11.570772 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 15:15:11.570776 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:01.094) 0:04:18.118 *********
2025-08-29 15:15:11.570781 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:15:11.570786 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:15:11.570790 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:15:11.570795 | orchestrator |
2025-08-29 15:15:11.570799 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 15:15:11.570804 | orchestrator | Friday 29 August 2025 15:10:29 +0000 (0:00:01.188) 0:04:19.307 *********
2025-08-29 15:15:11.571199 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:15:11.571210 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:15:11.571220 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:15:11.571225 | orchestrator |
2025-08-29 15:15:11.571230 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 15:15:11.571235 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:01.814) 0:04:21.121 *********
2025-08-29 15:15:11.571249 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:15:11.571254 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.571265 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:15:11.571270 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.571274 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:15:11.571279 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.571283 | orchestrator |
2025-08-29 15:15:11.571288 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-08-29 15:15:11.571292 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:00.553) 0:04:21.674 *********
2025-08-29 15:15:11.571297 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571301 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571306 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571310 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571315 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571319 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.571324 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571328 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571333 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.571338 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571342 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:15:11.571346 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571351 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.571355 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571360 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:15:11.571364 | orchestrator |
2025-08-29 15:15:11.571369 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-08-29 15:15:11.571373 | orchestrator | Friday 29 August 2025 15:10:33 +0000 (0:00:01.627) 0:04:23.301 *********
2025-08-29 15:15:11.571378 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.571382 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.571387 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.571391 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.571396 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.571407 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.571412 | orchestrator |
2025-08-29 15:15:11.571416 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-08-29 15:15:11.571421 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:01.597) 0:04:24.898 *********
2025-08-29 15:15:11.571425 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.571430 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.571434 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.571439 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.571443 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.571447 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.571452 | orchestrator |
2025-08-29 15:15:11.571456 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-08-29 15:15:11.571461 | orchestrator | Friday 29 August 2025 15:10:37 +0000 (0:00:02.881) 0:04:27.780 *********
2025-08-29 15:15:11.571489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571614 | orchestrator |
2025-08-29 15:15:11.571619 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:15:11.571624 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:03.306) 0:04:31.087 *********
2025-08-29 15:15:11.571629 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:11.571638 | orchestrator |
2025-08-29 15:15:11.571642 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-08-29 15:15:11.571647 | orchestrator | Friday 29 August 2025 15:10:42 +0000 (0:00:01.627) 0:04:32.714 *********
2025-08-29 15:15:11.571652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571783 | orchestrator |
2025-08-29 15:15:11.571788 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-08-29 15:15:11.571793 | orchestrator | Friday 29 August 2025 15:10:47 +0000 (0:00:05.047) 0:04:37.762 *********
2025-08-29 15:15:11.571800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571820 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.571838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571858 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.571868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:15:11.571874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:15:11.571879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571884 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.571902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571914 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.571921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:15:11.571931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:11.571935 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.571940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.571945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.571950 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.571954 | orchestrator | 2025-08-29 15:15:11.571959 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:15:11.571964 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:02.410) 0:04:40.173 ********* 2025-08-29 15:15:11.571981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.571987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.571998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572003 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.572008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.572013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.572030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572036 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.572041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.572055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.572060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572115 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.572121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.572126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572131 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.572150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.572156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572165 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.572173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.572178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.572182 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.572187 | orchestrator | 2025-08-29 15:15:11.572191 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:15:11.572196 | orchestrator | Friday 29 August 2025 15:10:54 +0000 (0:00:03.945) 0:04:44.118 ********* 2025-08-29 15:15:11.572201 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.572208 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.572215 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.572223 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:15:11.572230 | orchestrator | 2025-08-29 15:15:11.572240 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-08-29 15:15:11.572250 | orchestrator | Friday 29 August 2025 15:10:56 +0000 (0:00:02.082) 0:04:46.201 ********* 2025-08-29 15:15:11.572259 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:15:11.572265 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 15:15:11.572273 | orchestrator | ok: [testbed-node-5 -> localhost] 
2025-08-29 15:15:11.572279 | orchestrator | 2025-08-29 15:15:11.572286 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-08-29 15:15:11.572293 | orchestrator | Friday 29 August 2025 15:10:58 +0000 (0:00:01.905) 0:04:48.106 ********* 2025-08-29 15:15:11.572300 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 15:15:11.572307 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:15:11.572313 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 15:15:11.572320 | orchestrator | 2025-08-29 15:15:11.572327 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-08-29 15:15:11.572334 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:00:01.590) 0:04:49.696 ********* 2025-08-29 15:15:11.572341 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:15:11.572348 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:15:11.572355 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:15:11.572362 | orchestrator | 2025-08-29 15:15:11.572369 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-08-29 15:15:11.572376 | orchestrator | Friday 29 August 2025 15:11:00 +0000 (0:00:00.601) 0:04:50.298 ********* 2025-08-29 15:15:11.572383 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:15:11.572391 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:15:11.572406 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:15:11.572411 | orchestrator | 2025-08-29 15:15:11.572415 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-08-29 15:15:11.572420 | orchestrator | Friday 29 August 2025 15:11:01 +0000 (0:00:01.013) 0:04:51.311 ********* 2025-08-29 15:15:11.572425 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:15:11.572449 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:15:11.572455 | orchestrator | 
changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:15:11.572460 | orchestrator | 2025-08-29 15:15:11.572464 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-08-29 15:15:11.572469 | orchestrator | Friday 29 August 2025 15:11:02 +0000 (0:00:01.284) 0:04:52.596 ********* 2025-08-29 15:15:11.572473 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:15:11.572478 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:15:11.572482 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:15:11.572487 | orchestrator | 2025-08-29 15:15:11.572491 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-08-29 15:15:11.572496 | orchestrator | Friday 29 August 2025 15:11:04 +0000 (0:00:01.393) 0:04:53.989 ********* 2025-08-29 15:15:11.572500 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:15:11.572505 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:15:11.572509 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:15:11.572514 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-08-29 15:15:11.572518 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-08-29 15:15:11.572523 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-08-29 15:15:11.572527 | orchestrator | 2025-08-29 15:15:11.572532 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-08-29 15:15:11.572536 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:04.114) 0:04:58.104 ********* 2025-08-29 15:15:11.572541 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.572546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.572550 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.572555 | orchestrator 
| 2025-08-29 15:15:11.572565 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-08-29 15:15:11.572570 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:00.350) 0:04:58.455 ********* 2025-08-29 15:15:11.572574 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.572579 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.572583 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.572588 | orchestrator | 2025-08-29 15:15:11.572593 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-08-29 15:15:11.572597 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:00.326) 0:04:58.781 ********* 2025-08-29 15:15:11.572602 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:15:11.572606 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:15:11.572611 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:15:11.572615 | orchestrator | 2025-08-29 15:15:11.572620 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-08-29 15:15:11.572624 | orchestrator | Friday 29 August 2025 15:11:11 +0000 (0:00:02.918) 0:05:01.699 ********* 2025-08-29 15:15:11.572630 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:15:11.572635 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:15:11.572639 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:15:11.572644 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:15:11.572654 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:15:11.572658 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:15:11.572662 | orchestrator | 2025-08-29 15:15:11.572666 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-08-29 15:15:11.572670 | orchestrator | Friday 29 August 2025 15:11:15 +0000 (0:00:04.064) 0:05:05.764 ********* 2025-08-29 15:15:11.572675 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:15:11.572679 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:15:11.572683 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:15:11.572687 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:15:11.572691 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:15:11.572695 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:15:11.572699 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:15:11.572703 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:15:11.572707 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:15:11.572711 | orchestrator | 2025-08-29 15:15:11.572715 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-08-29 15:15:11.572720 | orchestrator | Friday 29 August 2025 15:11:19 +0000 (0:00:03.423) 0:05:09.187 ********* 2025-08-29 15:15:11.572724 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.572728 | orchestrator | 2025-08-29 15:15:11.572732 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-08-29 15:15:11.572736 | orchestrator | Friday 29 August 2025 15:11:19 +0000 (0:00:00.125) 0:05:09.313 ********* 2025-08-29 15:15:11.572740 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 15:15:11.572744 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.572748 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.572752 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.572756 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.572760 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.572764 | orchestrator | 2025-08-29 15:15:11.572768 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-08-29 15:15:11.572785 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:00.838) 0:05:10.152 ********* 2025-08-29 15:15:11.572789 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:15:11.572794 | orchestrator | 2025-08-29 15:15:11.572798 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-08-29 15:15:11.572802 | orchestrator | Friday 29 August 2025 15:11:20 +0000 (0:00:00.674) 0:05:10.826 ********* 2025-08-29 15:15:11.572806 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.572810 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.572814 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.572818 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.572822 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.572826 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.572830 | orchestrator | 2025-08-29 15:15:11.572834 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 15:15:11.572838 | orchestrator | Friday 29 August 2025 15:11:21 +0000 (0:00:00.626) 0:05:11.453 ********* 2025-08-29 15:15:11.572846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2025-08-29 15:15:11.572865 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572916 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-08-29 15:15:11.572936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572940 | orchestrator | 2025-08-29 15:15:11.572944 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 15:15:11.572948 | orchestrator | Friday 29 August 2025 15:11:26 +0000 (0:00:04.868) 0:05:16.321 ********* 2025-08-29 15:15:11.572953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.572961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.572968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.572977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.572981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.572986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.572993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.572998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573059 | orchestrator | 2025-08-29 15:15:11.573078 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 15:15:11.573083 | orchestrator | Friday 29 August 2025 15:11:33 +0000 (0:00:06.751) 0:05:23.073 ********* 2025-08-29 15:15:11.573087 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573091 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573095 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573099 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573107 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573111 | orchestrator | 2025-08-29 15:15:11.573115 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 15:15:11.573119 | orchestrator | Friday 29 August 2025 15:11:34 +0000 (0:00:01.788) 0:05:24.862 ********* 2025-08-29 15:15:11.573123 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:15:11.573127 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:15:11.573132 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:15:11.573136 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:15:11.573140 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:15:11.573144 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:15:11.573148 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:15:11.573152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573156 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:15:11.573160 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573164 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:15:11.573168 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573172 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:15:11.573177 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:15:11.573181 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:15:11.573185 | orchestrator | 2025-08-29 15:15:11.573189 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 15:15:11.573193 | orchestrator | Friday 29 August 2025 15:11:40 +0000 (0:00:05.769) 0:05:30.631 ********* 2025-08-29 15:15:11.573197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573201 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573209 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573213 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573217 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573221 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573225 | orchestrator | 2025-08-29 15:15:11.573229 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 15:15:11.573233 | orchestrator | Friday 29 August 
2025 15:11:41 +0000 (0:00:00.851) 0:05:31.483 ********* 2025-08-29 15:15:11.573237 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:15:11.573242 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:15:11.573249 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:15:11.573253 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:15:11.573257 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573261 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573265 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:15:11.573269 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:15:11.573273 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573277 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573281 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573286 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573290 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573294 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 
'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:15:11.573298 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573306 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573310 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573314 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573318 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573322 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573326 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:15:11.573331 | orchestrator | 2025-08-29 15:15:11.573335 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 15:15:11.573339 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:05.373) 0:05:36.857 ********* 2025-08-29 15:15:11.573343 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:15:11.573347 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:15:11.573351 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:15:11.573355 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:15:11.573363 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:15:11.573367 | orchestrator | 
changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:15:11.573371 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:15:11.573376 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:15:11.573380 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:15:11.573384 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:15:11.573388 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:15:11.573392 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:15:11.573400 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:15:11.573404 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:15:11.573408 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:15:11.573412 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573416 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:15:11.573420 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:15:11.573424 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573428 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:15:11.573433 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:15:11.573437 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:15:11.573443 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:15:11.573447 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:15:11.573451 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:15:11.573456 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:15:11.573460 | orchestrator | 2025-08-29 15:15:11.573464 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 15:15:11.573468 | orchestrator | Friday 29 August 2025 15:11:54 +0000 (0:00:07.597) 0:05:44.454 ********* 2025-08-29 15:15:11.573472 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573484 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573488 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573492 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573496 | orchestrator | 2025-08-29 15:15:11.573500 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 15:15:11.573505 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:00.567) 0:05:45.022 ********* 2025-08-29 15:15:11.573509 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573513 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573517 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573521 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573529 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573533 | 
orchestrator | 2025-08-29 15:15:11.573537 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 15:15:11.573546 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:00.747) 0:05:45.769 ********* 2025-08-29 15:15:11.573550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573556 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573560 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573564 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:15:11.573569 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:15:11.573573 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:15:11.573577 | orchestrator | 2025-08-29 15:15:11.573581 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 15:15:11.573585 | orchestrator | Friday 29 August 2025 15:11:58 +0000 (0:00:02.415) 0:05:48.185 ********* 2025-08-29 15:15:11.573589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.573594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.573598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573603 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.573623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.573627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573631 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.573640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573644 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:15:11.573657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:15:11.573668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573672 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.573681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573685 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:15:11.573697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:11.573701 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573705 | orchestrator | 2025-08-29 15:15:11.573709 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-08-29 15:15:11.573717 | orchestrator | Friday 29 August 2025 15:11:59 +0000 (0:00:01.588) 0:05:49.774 ********* 2025-08-29 15:15:11.573721 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 15:15:11.573726 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 15:15:11.573730 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:15:11.573734 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 15:15:11.573738 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 15:15:11.573742 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:15:11.573746 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 15:15:11.573750 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 15:15:11.573755 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 15:15:11.573759 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 15:15:11.573763 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:15:11.573767 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 15:15:11.573771 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 
15:15:11.573775 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.573779 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.573783 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 15:15:11.573787 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 15:15:11.573791 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.573798 | orchestrator | 2025-08-29 15:15:11.573802 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-08-29 15:15:11.573806 | orchestrator | Friday 29 August 2025 15:12:00 +0000 (0:00:00.635) 0:05:50.409 ********* 2025-08-29 15:15:11.573811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573870 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:11.573899 | orchestrator | 2025-08-29 15:15:11.573921 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:15:11.573925 | orchestrator | Friday 29 August 2025 
15:12:03 +0000 (0:00:03.117) 0:05:53.527 *********
2025-08-29 15:15:11.573930 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.573934 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.573938 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.573944 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.573949 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.573953 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.573957 | orchestrator |
2025-08-29 15:15:11.573961 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.573965 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.610) 0:05:54.137 *********
2025-08-29 15:15:11.573969 | orchestrator |
2025-08-29 15:15:11.573973 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.573977 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.132) 0:05:54.269 *********
2025-08-29 15:15:11.573982 | orchestrator |
2025-08-29 15:15:11.573986 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.573990 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.134) 0:05:54.404 *********
2025-08-29 15:15:11.573994 | orchestrator |
2025-08-29 15:15:11.573998 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.574002 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.293) 0:05:54.697 *********
2025-08-29 15:15:11.574006 | orchestrator |
2025-08-29 15:15:11.574010 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.574047 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:00.150) 0:05:54.848 *********
2025-08-29 15:15:11.574053 | orchestrator |
2025-08-29 15:15:11.574057 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:15:11.574061 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.135) 0:05:54.983 *********
2025-08-29 15:15:11.574095 | orchestrator |
2025-08-29 15:15:11.574100 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-08-29 15:15:11.574104 | orchestrator | Friday 29 August 2025 15:12:05 +0000 (0:00:00.133) 0:05:55.117 *********
2025-08-29 15:15:11.574109 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.574113 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.574123 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.574127 | orchestrator |
2025-08-29 15:15:11.574131 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-08-29 15:15:11.574135 | orchestrator | Friday 29 August 2025 15:12:20 +0000 (0:00:15.053) 0:06:10.170 *********
2025-08-29 15:15:11.574140 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.574144 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.574148 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.574153 | orchestrator |
2025-08-29 15:15:11.574157 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-08-29 15:15:11.574161 | orchestrator | Friday 29 August 2025 15:12:36 +0000 (0:00:16.365) 0:06:26.535 *********
2025-08-29 15:15:11.574166 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.574170 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.574174 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.574178 | orchestrator |
2025-08-29 15:15:11.574188 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-08-29 15:15:11.574192 | orchestrator | Friday 29 August 2025 15:12:56 +0000 (0:00:20.325) 0:06:46.860 *********
2025-08-29 15:15:11.574196 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.574200 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.574205 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.574209 | orchestrator |
2025-08-29 15:15:11.574213 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-08-29 15:15:11.574217 | orchestrator | Friday 29 August 2025 15:13:35 +0000 (0:00:38.774) 0:07:25.635 *********
2025-08-29 15:15:11.574222 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.574226 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.574230 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.574234 | orchestrator |
2025-08-29 15:15:11.574239 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-08-29 15:15:11.574243 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:00.906) 0:07:26.541 *********
2025-08-29 15:15:11.574247 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.574251 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.574255 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.574260 | orchestrator |
2025-08-29 15:15:11.574264 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-08-29 15:15:11.574268 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:01.032) 0:07:27.574 *********
2025-08-29 15:15:11.574272 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:15:11.574277 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:15:11.574281 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:15:11.574285 | orchestrator |
2025-08-29 15:15:11.574289 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-08-29 15:15:11.574293 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:21.373) 0:07:48.947 *********
2025-08-29 15:15:11.574298 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.574302 | orchestrator |
2025-08-29 15:15:11.574306 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-08-29 15:15:11.574310 | orchestrator | Friday 29 August 2025 15:13:59 +0000 (0:00:00.109) 0:07:49.057 *********
2025-08-29 15:15:11.574315 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574319 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.574323 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.574327 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574331 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574336 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-08-29 15:15:11.574340 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:15:11.574344 | orchestrator |
2025-08-29 15:15:11.574349 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-08-29 15:15:11.574353 | orchestrator | Friday 29 August 2025 15:14:20 +0000 (0:00:21.841) 0:08:10.898 *********
2025-08-29 15:15:11.574357 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.574361 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.574366 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.574370 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574378 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574382 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574386 | orchestrator |
2025-08-29 15:15:11.574391 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-08-29 15:15:11.574395 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:09.552) 0:08:20.451 *********
2025-08-29 15:15:11.574399 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.574403 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.574407 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574412 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574420 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574424 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-08-29 15:15:11.574428 | orchestrator |
2025-08-29 15:15:11.574432 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 15:15:11.574436 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:04.014) 0:08:24.466 *********
2025-08-29 15:15:11.574441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:15:11.574445 | orchestrator |
2025-08-29 15:15:11.574449 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 15:15:11.574453 | orchestrator | Friday 29 August 2025 15:14:48 +0000 (0:00:14.280) 0:08:38.746 *********
2025-08-29 15:15:11.574458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:15:11.574462 | orchestrator |
2025-08-29 15:15:11.574466 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-08-29 15:15:11.574470 | orchestrator | Friday 29 August 2025 15:14:50 +0000 (0:00:01.384) 0:08:40.130 *********
2025-08-29 15:15:11.574474 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.574479 | orchestrator |
2025-08-29 15:15:11.574483 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-08-29 15:15:11.574490 | orchestrator | Friday 29 August 2025 15:14:51 +0000 (0:00:01.348) 0:08:41.479 *********
2025-08-29 15:15:11.574494 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:15:11.574498 | orchestrator |
2025-08-29 15:15:11.574502 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-08-29 15:15:11.574506 | orchestrator | Friday 29 August 2025 15:15:02 +0000 (0:00:11.371) 0:08:52.850 *********
2025-08-29 15:15:11.574511 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:15:11.574515 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:15:11.574519 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:11.574524 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:15:11.574528 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:15:11.574532 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:15:11.574536 | orchestrator |
2025-08-29 15:15:11.574541 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-08-29 15:15:11.574545 | orchestrator |
2025-08-29 15:15:11.574549 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-08-29 15:15:11.574554 | orchestrator | Friday 29 August 2025 15:15:04 +0000 (0:00:01.838) 0:08:54.688 *********
2025-08-29 15:15:11.574558 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:11.574562 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:11.574566 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:11.574570 | orchestrator |
2025-08-29 15:15:11.574575 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-08-29 15:15:11.574579 | orchestrator |
2025-08-29 15:15:11.574583 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-08-29 15:15:11.574588 | orchestrator | Friday 29 August 2025 15:15:05 +0000 (0:00:01.005) 0:08:55.694 *********
2025-08-29 15:15:11.574592 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574596 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574600 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574605 | orchestrator |
2025-08-29 15:15:11.574609 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-08-29 15:15:11.574613 | orchestrator |
2025-08-29 15:15:11.574617 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-08-29 15:15:11.574622 | orchestrator | Friday 29 August 2025 15:15:06 +0000 (0:00:00.772) 0:08:56.467 *********
2025-08-29 15:15:11.574626 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-08-29 15:15:11.574630 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:15:11.574634 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574638 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-08-29 15:15:11.574647 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-08-29 15:15:11.574651 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574656 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:15:11.574660 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-08-29 15:15:11.574664 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:15:11.574669 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574673 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-08-29 15:15:11.574677 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-08-29 15:15:11.574681 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574686 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:15:11.574690 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-08-29 15:15:11.574694 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:15:11.574698 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574702 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-08-29 15:15:11.574707 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-08-29 15:15:11.574711 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574715 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-08-29 15:15:11.574722 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 15:15:11.574726 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574730 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-08-29 15:15:11.574734 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-08-29 15:15:11.574739 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574743 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:15:11.574747 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-08-29 15:15:11.574751 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 15:15:11.574756 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574760 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-08-29 15:15:11.574764 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-08-29 15:15:11.574768 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574772 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574777 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574781 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-08-29 15:15:11.574785 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 15:15:11.574789 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 15:15:11.574793 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-08-29 15:15:11.574798 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-08-29 15:15:11.574802 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-08-29 15:15:11.574806 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574810 | orchestrator |
2025-08-29 15:15:11.574814 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-08-29 15:15:11.574819 | orchestrator |
2025-08-29 15:15:11.574836 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-08-29 15:15:11.574840 | orchestrator | Friday 29 August 2025 15:15:07 +0000 (0:00:01.384) 0:08:57.851 *********
2025-08-29 15:15:11.574845 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-08-29 15:15:11.574849 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-08-29 15:15:11.574853 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:11.574862 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-08-29 15:15:11.574867 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-08-29 15:15:11.574871 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:11.574875 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-08-29 15:15:11.574879 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-08-29 15:15:11.574883 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:11.574888 | orchestrator |
2025-08-29 15:15:11.574892 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-08-29 15:15:11.574896 | orchestrator |
2025-08-29 15:15:11.574901 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-08-29 15:15:11.574905 | orchestrator | Friday 29 August 2025 15:15:08 +0000 (0:00:00.564) 0:08:58.416 ********* 2025-08-29 15:15:11.574909 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.574914 | orchestrator | 2025-08-29 15:15:11.574918 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-08-29 15:15:11.574922 | orchestrator | 2025-08-29 15:15:11.574926 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-08-29 15:15:11.574930 | orchestrator | Friday 29 August 2025 15:15:09 +0000 (0:00:00.919) 0:08:59.335 ********* 2025-08-29 15:15:11.574935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:11.574939 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:11.574943 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:11.574947 | orchestrator | 2025-08-29 15:15:11.574951 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:15:11.574956 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:15:11.574961 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-08-29 15:15:11.574965 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 15:15:11.574969 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 15:15:11.574974 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 15:15:11.574978 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 15:15:11.574982 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 15:15:11.574986 | 
orchestrator | 2025-08-29 15:15:11.574991 | orchestrator | 2025-08-29 15:15:11.574995 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:15:11.574999 | orchestrator | Friday 29 August 2025 15:15:09 +0000 (0:00:00.477) 0:08:59.813 ********* 2025-08-29 15:15:11.575003 | orchestrator | =============================================================================== 2025-08-29 15:15:11.575010 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.77s 2025-08-29 15:15:11.575015 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.00s 2025-08-29 15:15:11.575019 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.84s 2025-08-29 15:15:11.575023 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.37s 2025-08-29 15:15:11.575027 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.88s 2025-08-29 15:15:11.575031 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.33s 2025-08-29 15:15:11.575036 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.00s 2025-08-29 15:15:11.575044 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.50s 2025-08-29 15:15:11.575049 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.37s 2025-08-29 15:15:11.575053 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.72s 2025-08-29 15:15:11.575057 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 15.05s 2025-08-29 15:15:11.575061 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.77s 2025-08-29 15:15:11.575104 | orchestrator | nova-cell : Get a list of existing cells 
------------------------------- 14.28s 2025-08-29 15:15:11.575108 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s 2025-08-29 15:15:11.575112 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.37s 2025-08-29 15:15:11.575116 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.31s 2025-08-29 15:15:11.575120 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.55s 2025-08-29 15:15:11.575128 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.46s 2025-08-29 15:15:11.575132 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.45s 2025-08-29 15:15:11.575136 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.06s 2025-08-29 15:15:11.575140 | orchestrator | 2025-08-29 15:15:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:14.618711 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:14.621984 | orchestrator | 2025-08-29 15:15:14 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:14.622280 | orchestrator | 2025-08-29 15:15:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:17.671998 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:17.673807 | orchestrator | 2025-08-29 15:15:17 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:17.673856 | orchestrator | 2025-08-29 15:15:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:20.723902 | orchestrator | 2025-08-29 15:15:20 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:20.727858 | orchestrator | 2025-08-29 15:15:20 | INFO  | 
Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:20.727940 | orchestrator | 2025-08-29 15:15:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:23.780093 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:23.785041 | orchestrator | 2025-08-29 15:15:23 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:23.785567 | orchestrator | 2025-08-29 15:15:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:26.832484 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:26.834472 | orchestrator | 2025-08-29 15:15:26 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:26.834539 | orchestrator | 2025-08-29 15:15:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:29.882172 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:29.883502 | orchestrator | 2025-08-29 15:15:29 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:29.883561 | orchestrator | 2025-08-29 15:15:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:32.928269 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:32.930776 | orchestrator | 2025-08-29 15:15:32 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state STARTED 2025-08-29 15:15:32.930888 | orchestrator | 2025-08-29 15:15:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:15:35.975784 | orchestrator | 2025-08-29 15:15:35 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:15:35.980836 | orchestrator | 2025-08-29 15:15:35 | INFO  | Task b46b7839-63be-46e7-b658-1e300412f8ad is in state SUCCESS 2025-08-29 
15:15:35.983677 | orchestrator | 2025-08-29 15:15:35.983747 | orchestrator | 2025-08-29 15:15:35.983757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:15:35.983765 | orchestrator | 2025-08-29 15:15:35.983772 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:15:35.983780 | orchestrator | Friday 29 August 2025 15:13:13 +0000 (0:00:00.249) 0:00:00.250 ********* 2025-08-29 15:15:35.983866 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:35.983876 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:35.983882 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:35.983889 | orchestrator | 2025-08-29 15:15:35.983896 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:15:35.984192 | orchestrator | Friday 29 August 2025 15:13:13 +0000 (0:00:00.271) 0:00:00.521 ********* 2025-08-29 15:15:35.984202 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-08-29 15:15:35.984210 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-08-29 15:15:35.984217 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-08-29 15:15:35.984224 | orchestrator | 2025-08-29 15:15:35.984231 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-08-29 15:15:35.984237 | orchestrator | 2025-08-29 15:15:35.984244 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 15:15:35.984251 | orchestrator | Friday 29 August 2025 15:13:13 +0000 (0:00:00.357) 0:00:00.879 ********* 2025-08-29 15:15:35.984258 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:35.984266 | orchestrator | 2025-08-29 15:15:35.984273 | orchestrator | TASK [grafana : Ensuring config directories exist] 
***************************** 2025-08-29 15:15:35.984293 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:00.502) 0:00:01.381 ********* 2025-08-29 15:15:35.984303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.984314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.984340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.984348 | orchestrator | 2025-08-29 15:15:35.984355 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-08-29 15:15:35.984401 | orchestrator | Friday 29 August 2025 15:13:14 +0000 (0:00:00.694) 0:00:02.076 ********* 2025-08-29 15:15:35.984409 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-08-29 15:15:35.984416 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-08-29 15:15:35.984423 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:15:35.984430 | orchestrator | 2025-08-29 15:15:35.985096 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 15:15:35.985116 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:00.831) 0:00:02.908 ********* 2025-08-29 15:15:35.985123 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:35.985130 | orchestrator | 2025-08-29 15:15:35.985137 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-08-29 15:15:35.985144 | orchestrator | Friday 29 August 2025 15:13:16 +0000 (0:00:00.676) 0:00:03.584 ********* 2025-08-29 15:15:35.985188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985231 | orchestrator | 2025-08-29 15:15:35.985247 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-08-29 15:15:35.985265 | orchestrator | Friday 
29 August 2025 15:13:17 +0000 (0:00:01.517) 0:00:05.102 ********* 2025-08-29 15:15:35.985272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985286 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:35.985294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:35.985325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985333 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:35.985340 | orchestrator | 2025-08-29 15:15:35.985347 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-08-29 15:15:35.985353 | orchestrator | Friday 29 August 2025 15:13:18 +0000 (0:00:00.406) 0:00:05.509 ********* 2025-08-29 15:15:35.985360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:35.985388 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:35.985395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:15:35.985402 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:35.985409 | orchestrator | 2025-08-29 15:15:35.985416 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-08-29 15:15:35.985422 | orchestrator | Friday 29 August 2025 15:13:19 +0000 (0:00:00.939) 0:00:06.448 ********* 2025-08-29 15:15:35.985429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985472 | orchestrator | 2025-08-29 15:15:35.985478 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-08-29 15:15:35.985485 | orchestrator | Friday 29 August 2025 15:13:20 +0000 (0:00:01.469) 0:00:07.918 ********* 2025-08-29 15:15:35.985495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.985514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2025-08-29 15:15:35.985520 | orchestrator |
2025-08-29 15:15:35.985527 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-08-29 15:15:35.985534 | orchestrator | Friday 29 August 2025 15:13:22 +0000 (0:00:01.478) 0:00:09.396 *********
2025-08-29 15:15:35.985541 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:35.985547 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:35.985554 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:35.985560 | orchestrator |
2025-08-29 15:15:35.985567 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 15:15:35.985574 | orchestrator | Friday 29 August 2025 15:13:22 +0000 (0:00:00.504) 0:00:09.901 *********
2025-08-29 15:15:35.985580 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:15:35.985587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:15:35.985594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:15:35.985601 | orchestrator |
2025-08-29 15:15:35.985607 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 15:15:35.985614 | orchestrator | Friday 29 August 2025 15:13:23 +0000 (0:00:01.195) 0:00:11.096 *********
2025-08-29 15:15:35.985620 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:15:35.985646 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:15:35.985655 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:15:35.985662 | orchestrator |
2025-08-29 15:15:35.985670 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 15:15:35.985677 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:01.203) 0:00:12.300 *********
2025-08-29 15:15:35.985684 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:15:35.985692 | orchestrator |
2025-08-29 15:15:35.985699 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 15:15:35.985707 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.764) 0:00:13.065 *********
2025-08-29 15:15:35.985721 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 15:15:35.985729 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 15:15:35.985737 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:35.985745 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:15:35.985752 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:15:35.985760 | orchestrator |
2025-08-29 15:15:35.985767 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 15:15:35.985774 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.763) 0:00:13.829 *********
2025-08-29 15:15:35.985781 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:35.985789 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:35.985797 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:35.985804 | orchestrator |
2025-08-29 15:15:35.985812 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 15:15:35.985822 | orchestrator | Friday 29 August 2025 15:13:27 +0000 (0:00:00.517) 0:00:14.346 *********
2025-08-29 15:15:35.985831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313090, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6255777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313090, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6255777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1313090, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6255777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313210, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.645714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313210, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.645714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1313210, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.645714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313123, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.630714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313123, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.630714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1313123, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.630714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313213, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.647714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313213, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.647714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1313213, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.647714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1313149, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.636714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.985994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1313149, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.636714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1313149, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.636714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313197, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6446831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313197, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6446831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1313197, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6446831, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313088, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6237137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313088, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6237137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1313088, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6237137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313104, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6268137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313104, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6268137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1313104, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6268137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313126, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6317139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313126, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6317139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1313126, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6317139, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313170, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6404257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313170, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6404257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1313170, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6404257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313207, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6456177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313207, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6456177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1313207, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6456177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313111, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.628921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313111, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.628921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1313111, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.628921, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313192, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6438153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313192, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6438153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1313192, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6438153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313156, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6387138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313156, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6387138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1313156, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6387138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313139, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6352472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313139, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6352472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1313139, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6352472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313135, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.633714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313135, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.633714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1313135, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.633714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313179, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.64292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313179, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.64292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1313179, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.64292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313128, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6331606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313128, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6331606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1313128, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6331606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313203, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6453216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313203, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6453216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:15:35.986525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir':
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1313203, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6453216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313396, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6838913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313396, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6838913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1313396, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6838913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1313261, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.660924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1313261, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.660924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986578 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1313261, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.660924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1313235, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6507142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1313235, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6507142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986603 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1313235, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6507142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313308, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.664919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313308, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.664919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1313308, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.664919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1313225, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6488276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1313225, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6488276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1313225, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6488276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313363, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6757143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313363, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1756477365.6757143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1313363, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6757143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1313315, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6719012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 682774, 'inode': 1313315, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6719012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1313315, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6719012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313370, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6767738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313370, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6767738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1313370, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6767738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313393, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6827142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313393, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6827142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1313393, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6827142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313356, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6752522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986789 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313356, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6752522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1313356, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6752522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313300, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6629024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986818 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313300, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6629024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1313300, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6629024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313251, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6549144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-08-29 15:15:35.986845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313251, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6549144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1313251, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6549144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313295, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.661714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313295, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.661714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1313295, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.661714, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1313239, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6539156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1313239, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6539156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1313239, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6539156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313305, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1756477365.6637142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313305, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6637142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1313305, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6637142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 222049, 'inode': 1313385, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6817143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313385, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6817143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1313385, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6817143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313379, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6794124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313379, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6794124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1313379, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6794124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.986998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313229, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.649406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313229, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.649406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1313229, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.649406, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987024 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313231, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6503565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313231, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6503565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1313231, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6503565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987094 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313351, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6743271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313351, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6743271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1313351, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6743271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313374, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6778781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313374, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6778781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1313374, 'dev': 115, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756477365.6778781, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:15:35.987147 | orchestrator | 2025-08-29 15:15:35.987154 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-08-29 15:15:35.987162 | orchestrator | Friday 29 August 2025 15:14:05 +0000 (0:00:38.643) 0:00:52.989 ********* 2025-08-29 15:15:35.987168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.987176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:15:35.987183 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:15:35.987190 | orchestrator |
2025-08-29 15:15:35.987196 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-08-29 15:15:35.987203 | orchestrator | Friday 29 August 2025 15:14:06 +0000 (0:00:00.887) 0:00:53.877 *********
2025-08-29 15:15:35.987210 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:35.987216 | orchestrator |
2025-08-29 15:15:35.987223 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-08-29 15:15:35.987233 | orchestrator | Friday 29 August 2025 15:14:08 +0000 (0:00:02.075) 0:00:55.952 *********
2025-08-29 15:15:35.987240 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:35.987247 | orchestrator |
2025-08-29 15:15:35.987253 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 15:15:35.987260 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:02.338) 0:00:58.291 *********
2025-08-29 15:15:35.987271 | orchestrator |
2025-08-29 15:15:35.987278 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 15:15:35.987285 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:00.244) 0:00:58.535 *********
2025-08-29 15:15:35.987292 | orchestrator |
2025-08-29 15:15:35.987298 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 15:15:35.987305 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:00.081) 0:00:58.617 *********
2025-08-29 15:15:35.987311 | orchestrator |
2025-08-29 15:15:35.987318 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-08-29 15:15:35.987324 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:00.080) 0:00:58.697 *********
2025-08-29 15:15:35.987331 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:35.987338 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:35.987344 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:35.987351 | orchestrator |
2025-08-29 15:15:35.987357 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-08-29 15:15:35.987364 | orchestrator | Friday 29 August 2025 15:14:18 +0000 (0:00:06.981) 0:01:05.679 *********
2025-08-29 15:15:35.987371 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:35.987377 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:35.987384 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-08-29 15:15:35.987394 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-08-29 15:15:35.987401 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-08-29 15:15:35.987407 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:35.987414 | orchestrator |
2025-08-29 15:15:35.987420 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-08-29 15:15:35.987427 | orchestrator | Friday 29 August 2025 15:14:57 +0000 (0:00:38.875) 0:01:44.555 *********
2025-08-29 15:15:35.987434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:35.987440 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:15:35.987447 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:15:35.987454 | orchestrator |
2025-08-29 15:15:35.987461 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-08-29 15:15:35.987467 | orchestrator | Friday 29 August 2025 15:15:28 +0000 (0:00:30.697) 0:02:15.252 *********
2025-08-29 15:15:35.987474 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:35.987481 | orchestrator |
2025-08-29 15:15:35.987487 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-08-29 15:15:35.987494 | orchestrator | Friday 29 August 2025 15:15:30 +0000 (0:00:02.431) 0:02:17.683 *********
2025-08-29 15:15:35.987501 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:35.987507 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:35.987514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:35.987520 | orchestrator |
2025-08-29 15:15:35.987527 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-08-29 15:15:35.987533 | orchestrator | Friday 29 August 2025 15:15:31 +0000 (0:00:00.511) 0:02:18.195 *********
2025-08-29 15:15:35.987541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-08-29 15:15:35.987550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-08-29 15:15:35.987557 | orchestrator |
2025-08-29 15:15:35.987564 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-08-29 15:15:35.987576 | orchestrator | Friday 29 August 2025 15:15:33 +0000 (0:00:02.395) 0:02:20.590 *********
2025-08-29 15:15:35.987583 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:35.987589 | orchestrator |
2025-08-29 15:15:35.987596 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:15:35.987603 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:15:35.987610 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:15:35.987617 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:15:35.987624 | orchestrator |
2025-08-29 15:15:35.987630 | orchestrator |
2025-08-29 15:15:35.987637 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:15:35.987644 | orchestrator | Friday 29 August 2025 15:15:33 +0000 (0:00:00.261) 0:02:20.852 *********
2025-08-29 15:15:35.987651 | orchestrator | ===============================================================================
2025-08-29 15:15:35.987661 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.88s
2025-08-29 15:15:35.987668 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.64s
2025-08-29 15:15:35.987675 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.70s
2025-08-29 15:15:35.987681 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.98s
2025-08-29 15:15:35.987688 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s
2025-08-29 15:15:35.987695 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.40s
2025-08-29 15:15:35.987702 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2025-08-29 15:15:35.987708 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.08s
2025-08-29 15:15:35.987715 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.52s
2025-08-29 15:15:35.987722 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.48s
2025-08-29 15:15:35.987728 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.47s
2025-08-29 15:15:35.987735 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.20s
2025-08-29 15:15:35.987742 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s
2025-08-29 15:15:35.987748 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.94s
2025-08-29 15:15:35.987755 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.89s
2025-08-29 15:15:35.987765 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s
2025-08-29 15:15:35.987771 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s
2025-08-29 15:15:35.987778 | orchestrator | grafana : Find templated grafana dashboards
----------------------------- 0.76s
2025-08-29 15:15:35.987785 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.69s
2025-08-29 15:15:35.987791 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s
2025-08-29 15:15:35.987798 | orchestrator | 2025-08-29 15:15:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:39.021529 | orchestrator | 2025-08-29 15:15:39 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:39.021617 | orchestrator | 2025-08-29 15:15:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:42.059185 | orchestrator | 2025-08-29 15:15:42 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:42.059288 | orchestrator | 2025-08-29 15:15:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:45.104355 | orchestrator | 2025-08-29 15:15:45 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:45.104444 | orchestrator | 2025-08-29 15:15:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:48.145183 | orchestrator | 2025-08-29 15:15:48 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:48.145287 | orchestrator | 2025-08-29 15:15:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:51.198552 | orchestrator | 2025-08-29 15:15:51 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:51.198643 | orchestrator | 2025-08-29 15:15:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:54.243619 | orchestrator | 2025-08-29 15:15:54 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:54.243719 | orchestrator | 2025-08-29 15:15:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:57.278617 | orchestrator | 2025-08-29 15:15:57 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:15:57.278731 | orchestrator | 2025-08-29 15:15:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:00.309987 | orchestrator | 2025-08-29 15:16:00 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:00.310185 | orchestrator | 2025-08-29 15:16:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:03.352613 | orchestrator | 2025-08-29 15:16:03 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:03.352737 | orchestrator | 2025-08-29 15:16:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:06.398630 | orchestrator | 2025-08-29 15:16:06 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:06.398721 | orchestrator | 2025-08-29 15:16:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:09.440387 | orchestrator | 2025-08-29 15:16:09 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:09.440486 | orchestrator | 2025-08-29 15:16:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:12.491325 | orchestrator | 2025-08-29 15:16:12 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:12.491413 | orchestrator | 2025-08-29 15:16:12 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:15.542292 | orchestrator | 2025-08-29 15:16:15 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:15.542332 | orchestrator | 2025-08-29 15:16:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:18.609584 | orchestrator | 2025-08-29 15:16:18 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:18.609648 | orchestrator | 2025-08-29 15:16:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:21.652400 | orchestrator | 2025-08-29 15:16:21 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:21.652505 | orchestrator | 2025-08-29 15:16:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:24.702228 | orchestrator | 2025-08-29 15:16:24 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:24.702344 | orchestrator | 2025-08-29 15:16:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:27.740860 | orchestrator | 2025-08-29 15:16:27 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:27.741052 | orchestrator | 2025-08-29 15:16:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:30.786365 | orchestrator | 2025-08-29 15:16:30 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:30.786523 | orchestrator | 2025-08-29 15:16:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:33.832485 | orchestrator | 2025-08-29 15:16:33 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:33.832603 | orchestrator | 2025-08-29 15:16:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:36.875182 | orchestrator | 2025-08-29 15:16:36 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:36.875280 | orchestrator | 2025-08-29 15:16:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:39.912575 | orchestrator | 2025-08-29 15:16:39 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:39.913285 | orchestrator | 2025-08-29 15:16:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:42.955348 | orchestrator | 2025-08-29 15:16:42 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:42.955477 | orchestrator | 2025-08-29 15:16:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:45.997374 | orchestrator | 2025-08-29 15:16:45 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:45.997450 | orchestrator | 2025-08-29 15:16:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:49.050431 | orchestrator | 2025-08-29 15:16:49 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:49.050563 | orchestrator | 2025-08-29 15:16:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:52.086222 | orchestrator | 2025-08-29 15:16:52 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:52.086339 | orchestrator | 2025-08-29 15:16:52 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:55.119926 | orchestrator | 2025-08-29 15:16:55 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:55.120101 | orchestrator | 2025-08-29 15:16:55 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:58.170665 | orchestrator | 2025-08-29 15:16:58 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:16:58.170767 | orchestrator | 2025-08-29 15:16:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:01.222480 | orchestrator | 2025-08-29 15:17:01 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:01.222607 | orchestrator | 2025-08-29 15:17:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:04.263587 | orchestrator | 2025-08-29 15:17:04 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:04.263713 | orchestrator | 2025-08-29 15:17:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:07.306155 | orchestrator | 2025-08-29 15:17:07 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:07.306245 | orchestrator | 2025-08-29 15:17:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:10.347389 | orchestrator | 2025-08-29 15:17:10 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:10.347502 | orchestrator | 2025-08-29 15:17:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:13.389735 | orchestrator | 2025-08-29 15:17:13 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:13.390197 | orchestrator | 2025-08-29 15:17:13 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:16.437872 | orchestrator | 2025-08-29 15:17:16 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:16.438109 | orchestrator | 2025-08-29 15:17:16 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:19.482371 | orchestrator | 2025-08-29 15:17:19 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:19.482497 | orchestrator | 2025-08-29 15:17:19 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:22.528346 | orchestrator | 2025-08-29 15:17:22 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:22.528445 | orchestrator | 2025-08-29 15:17:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:25.581222 | orchestrator | 2025-08-29 15:17:25 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:25.581334 | orchestrator | 2025-08-29 15:17:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:28.624420 | orchestrator | 2025-08-29 15:17:28 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:28.624554 | orchestrator | 2025-08-29 15:17:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:31.673218 | orchestrator | 2025-08-29 15:17:31 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:31.673332 | orchestrator | 2025-08-29 15:17:31 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:17:34.721608 | orchestrator | 2025-08-29 15:17:34 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:17:34.721721
| orchestrator | 2025-08-29 15:17:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:37.761598 | orchestrator | 2025-08-29 15:17:37 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:37.761712 | orchestrator | 2025-08-29 15:17:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:40.806821 | orchestrator | 2025-08-29 15:17:40 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:40.806938 | orchestrator | 2025-08-29 15:17:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:43.854211 | orchestrator | 2025-08-29 15:17:43 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:43.854283 | orchestrator | 2025-08-29 15:17:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:46.902619 | orchestrator | 2025-08-29 15:17:46 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:46.902727 | orchestrator | 2025-08-29 15:17:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:49.949018 | orchestrator | 2025-08-29 15:17:49 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:49.949184 | orchestrator | 2025-08-29 15:17:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:52.983890 | orchestrator | 2025-08-29 15:17:52 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:52.984022 | orchestrator | 2025-08-29 15:17:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:56.025500 | orchestrator | 2025-08-29 15:17:56 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:56.025609 | orchestrator | 2025-08-29 15:17:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:17:59.071602 | orchestrator | 2025-08-29 15:17:59 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:17:59.071723 | orchestrator 
| 2025-08-29 15:17:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:02.101964 | orchestrator | 2025-08-29 15:18:02 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:02.102111 | orchestrator | 2025-08-29 15:18:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:05.155913 | orchestrator | 2025-08-29 15:18:05 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:05.156005 | orchestrator | 2025-08-29 15:18:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:08.203761 | orchestrator | 2025-08-29 15:18:08 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:08.203841 | orchestrator | 2025-08-29 15:18:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:11.253315 | orchestrator | 2025-08-29 15:18:11 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:11.253441 | orchestrator | 2025-08-29 15:18:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:14.292469 | orchestrator | 2025-08-29 15:18:14 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:14.292578 | orchestrator | 2025-08-29 15:18:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:17.338714 | orchestrator | 2025-08-29 15:18:17 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:17.338807 | orchestrator | 2025-08-29 15:18:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:20.385842 | orchestrator | 2025-08-29 15:18:20 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:20.385938 | orchestrator | 2025-08-29 15:18:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:18:23.430919 | orchestrator | 2025-08-29 15:18:23 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED 2025-08-29 15:18:23.431031 | orchestrator | 2025-08-29 
15:18:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:18:26.475824 | orchestrator | 2025-08-29 15:18:26 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state STARTED
2025-08-29 15:18:26.475951 | orchestrator | 2025-08-29 15:18:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:18:29.522318 | orchestrator |
2025-08-29 15:18:29.522437 | orchestrator |
2025-08-29 15:18:29.522453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:18:29.522466 | orchestrator |
2025-08-29 15:18:29.522478 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:18:29.522490 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:00.419) 0:00:00.419 *********
2025-08-29 15:18:29.522501 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.522513 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:18:29.522524 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:18:29.522535 | orchestrator |
2025-08-29 15:18:29.522546 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:18:29.522557 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:00.305) 0:00:00.725 *********
2025-08-29 15:18:29.522568 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-08-29 15:18:29.522579 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-08-29 15:18:29.522590 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-08-29 15:18:29.522601 | orchestrator |
2025-08-29 15:18:29.522612 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-08-29 15:18:29.522623 | orchestrator |
2025-08-29 15:18:29.522634 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 15:18:29.522645 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:00.437) 0:00:01.162 *********
2025-08-29 15:18:29.522692 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:18:29.522705 | orchestrator |
2025-08-29 15:18:29.522716 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-08-29 15:18:29.522726 | orchestrator | Friday 29 August 2025 15:13:38 +0000 (0:00:01.166) 0:00:02.329 *********
2025-08-29 15:18:29.522737 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-08-29 15:18:29.522748 | orchestrator |
2025-08-29 15:18:29.522759 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-08-29 15:18:29.522770 | orchestrator | Friday 29 August 2025 15:13:42 +0000 (0:00:03.980) 0:00:06.309 *********
2025-08-29 15:18:29.522780 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-08-29 15:18:29.522792 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-08-29 15:18:29.522803 | orchestrator |
2025-08-29 15:18:29.522813 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-08-29 15:18:29.522824 | orchestrator | Friday 29 August 2025 15:13:49 +0000 (0:00:06.488) 0:00:12.797 *********
2025-08-29 15:18:29.522835 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:18:29.522868 | orchestrator |
2025-08-29 15:18:29.522881 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-08-29 15:18:29.522902 | orchestrator | Friday 29 August 2025 15:13:52 +0000 (0:00:03.018) 0:00:15.816 *********
2025-08-29 15:18:29.522913 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:18:29.522925 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
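The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines earlier in this log come from a client polling an asynchronous task until it leaves a running state. A minimal sketch of that polling pattern, assuming hypothetical helper names (this is not the actual osism client API):

```python
import time

def wait_for_task(get_state, task_id, interval=1.0, timeout=300.0):
    """Poll a task until it leaves a running state, logging each check
    (sketch of the STARTED/wait loop shown in the log above; get_state
    is a stand-in for the real task-state lookup)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # terminal state reached
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still running after {timeout}s")

# Fake backend that reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda _id: next(states), "d9a5fcb5", interval=0)
```

In the log above the loop polled the same task for roughly two minutes before the play output appeared, which matches a fixed-interval loop like this rather than exponential backoff.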
2025-08-29 15:18:29.522936 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-08-29 15:18:29.522946 | orchestrator |
2025-08-29 15:18:29.522957 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-08-29 15:18:29.522968 | orchestrator | Friday 29 August 2025 15:14:01 +0000 (0:00:08.848) 0:00:24.664 *********
2025-08-29 15:18:29.522979 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:18:29.522990 | orchestrator |
2025-08-29 15:18:29.523000 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-08-29 15:18:29.523011 | orchestrator | Friday 29 August 2025 15:14:04 +0000 (0:00:03.325) 0:00:27.989 *********
2025-08-29 15:18:29.523026 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-08-29 15:18:29.523038 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-08-29 15:18:29.523049 | orchestrator |
2025-08-29 15:18:29.523059 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-08-29 15:18:29.523070 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:06.988) 0:00:34.978 *********
2025-08-29 15:18:29.523081 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-08-29 15:18:29.523092 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-08-29 15:18:29.523103 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-08-29 15:18:29.523113 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-08-29 15:18:29.523124 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-08-29 15:18:29.523135 | orchestrator |
2025-08-29 15:18:29.523146 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 15:18:29.523157 | orchestrator | Friday 29 August 2025 15:14:27 +0000 (0:00:16.451) 0:00:51.432 *********
2025-08-29 15:18:29.523202 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:18:29.523219 | orchestrator |
2025-08-29 15:18:29.523237 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-08-29 15:18:29.523249 | orchestrator | Friday 29 August 2025 15:14:28 +0000 (0:00:01.081) 0:00:52.514 *********
2025-08-29 15:18:29.523260 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523278 | orchestrator |
2025-08-29 15:18:29.523289 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-08-29 15:18:29.523315 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:05.316) 0:00:57.831 *********
2025-08-29 15:18:29.523327 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523337 | orchestrator |
2025-08-29 15:18:29.523348 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-08-29 15:18:29.523377 | orchestrator | Friday 29 August 2025 15:14:38 +0000 (0:00:03.927) 0:01:01.758 *********
2025-08-29 15:18:29.523388 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.523399 | orchestrator |
2025-08-29 15:18:29.523410 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-08-29 15:18:29.523421 | orchestrator | Friday 29 August 2025 15:14:41 +0000 (0:00:03.283) 0:01:05.041 *********
2025-08-29 15:18:29.523432 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-08-29 15:18:29.523443 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-08-29 15:18:29.523454 | orchestrator |
2025-08-29 15:18:29.523465 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-08-29 15:18:29.523476 | orchestrator | Friday 29 August 2025 15:14:52 +0000 (0:00:11.248) 0:01:16.290 *********
2025-08-29 15:18:29.523487 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-08-29 15:18:29.523499 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-08-29 15:18:29.523512 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-08-29 15:18:29.523524 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-08-29 15:18:29.523535 | orchestrator |
2025-08-29 15:18:29.523546 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-08-29 15:18:29.523557 | orchestrator | Friday 29 August 2025 15:15:09 +0000 (0:00:17.038) 0:01:33.329 *********
2025-08-29 15:18:29.523568 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523579 | orchestrator |
2025-08-29 15:18:29.523590 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-08-29 15:18:29.523601 | orchestrator | Friday 29 August 2025 15:15:14 +0000 (0:00:04.709) 0:01:38.038 *********
2025-08-29 15:18:29.523612 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523622 | orchestrator |
2025-08-29 15:18:29.523633 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-08-29 15:18:29.523644 | orchestrator | Friday 29 August 2025 15:15:20 +0000 (0:00:05.553) 0:01:43.591 *********
2025-08-29 15:18:29.523655 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:18:29.523666 | orchestrator |
2025-08-29 15:18:29.523677 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-08-29 15:18:29.523688 | orchestrator | Friday 29 August 2025 15:15:20 +0000 (0:00:00.223) 0:01:43.815 *********
2025-08-29 15:18:29.523698 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523709 | orchestrator |
2025-08-29 15:18:29.523720 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 15:18:29.523731 | orchestrator | Friday 29 August 2025 15:15:25 +0000 (0:00:04.904) 0:01:48.719 *********
2025-08-29 15:18:29.523742 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:18:29.523753 | orchestrator |
2025-08-29 15:18:29.523764 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-08-29 15:18:29.523775 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:01.092) 0:01:49.812 *********
2025-08-29 15:18:29.523786 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.523796 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.523815 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523826 | orchestrator |
2025-08-29 15:18:29.523837 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-08-29 15:18:29.523848 | orchestrator | Friday 29 August 2025 15:15:31 +0000 (0:00:05.711) 0:01:55.523 *********
2025-08-29 15:18:29.523859 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.523870 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.523881 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523892 | orchestrator |
2025-08-29 15:18:29.523903 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-08-29 15:18:29.523914 | orchestrator | Friday 29 August 2025 15:15:36 +0000 (0:00:04.752) 0:02:00.276 *********
2025-08-29 15:18:29.523925 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.523936 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.523947 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.523958 | orchestrator |
2025-08-29 15:18:29.523969 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-08-29 15:18:29.523980 | orchestrator | Friday 29 August 2025 15:15:37 +0000 (0:00:00.792) 0:02:01.069 *********
2025-08-29 15:18:29.523991 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:18:29.524002 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:18:29.524012 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524023 | orchestrator |
2025-08-29 15:18:29.524035 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-08-29 15:18:29.524045 | orchestrator | Friday 29 August 2025 15:15:39 +0000 (0:00:02.100) 0:02:03.169 *********
2025-08-29 15:18:29.524056 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.524067 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.524079 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.524090 | orchestrator |
2025-08-29 15:18:29.524101 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-08-29 15:18:29.524112 | orchestrator | Friday 29 August 2025 15:15:40 +0000 (0:00:01.336) 0:02:04.505 *********
2025-08-29 15:18:29.524123 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.524134 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.524145 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.524155 | orchestrator |
2025-08-29 15:18:29.524272 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-08-29 15:18:29.524287 | orchestrator | Friday 29 August 2025 15:15:42 +0000 (0:00:01.204) 0:02:05.710 *********
2025-08-29 15:18:29.524298 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.524309 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.524320 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.524331 | orchestrator |
2025-08-29 15:18:29.524440 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-08-29 15:18:29.524455 | orchestrator | Friday 29 August 2025 15:15:44 +0000 (0:00:02.292) 0:02:08.003 *********
2025-08-29 15:18:29.524466 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:18:29.524477 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:18:29.524488 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:18:29.524499 | orchestrator |
2025-08-29 15:18:29.524510 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-08-29 15:18:29.524520 | orchestrator | Friday 29 August 2025 15:15:46 +0000 (0:00:01.829) 0:02:09.832 *********
2025-08-29 15:18:29.524531 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524542 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:18:29.524553 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:18:29.524564 | orchestrator |
2025-08-29 15:18:29.524575 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-08-29 15:18:29.524586 | orchestrator | Friday 29 August 2025 15:15:46 +0000 (0:00:00.639) 0:02:10.472 *********
2025-08-29 15:18:29.524596 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:18:29.524608 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:18:29.524619 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524630 | orchestrator |
2025-08-29 15:18:29.524652 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 15:18:29.524663 | orchestrator | Friday 29 August 2025 15:15:49 +0000 (0:00:02.807) 0:02:13.279 *********
2025-08-29 15:18:29.524674 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:18:29.524685 | orchestrator |
2025-08-29 15:18:29.524696 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-08-29 15:18:29.524707 | orchestrator | Friday 29 August 2025 15:15:50 +0000 (0:00:00.707) 0:02:13.987 *********
2025-08-29 15:18:29.524717 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524728 | orchestrator |
2025-08-29 15:18:29.524739 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-08-29 15:18:29.524749 | orchestrator | Friday 29 August 2025 15:15:54 +0000 (0:00:03.919) 0:02:17.907 *********
2025-08-29 15:18:29.524758 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524768 | orchestrator |
2025-08-29 15:18:29.524778 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-08-29 15:18:29.524787 | orchestrator | Friday 29 August 2025 15:15:57 +0000 (0:00:03.188) 0:02:21.095 *********
2025-08-29 15:18:29.524797 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-08-29 15:18:29.524807 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-08-29 15:18:29.524817 | orchestrator |
2025-08-29 15:18:29.524826 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-08-29 15:18:29.524836 | orchestrator | Friday 29 August 2025 15:16:04 +0000 (0:00:07.326) 0:02:28.422 *********
2025-08-29 15:18:29.524845 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524855 | orchestrator |
2025-08-29 15:18:29.524864 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-08-29 15:18:29.524875 | orchestrator | Friday 29 August 2025 15:16:09 +0000 (0:00:04.569) 0:02:32.991 *********
2025-08-29 15:18:29.524884 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:18:29.524894 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:18:29.524903 | orchestrator | ok: [testbed-node-2]
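The "Add rules for security groups" task above applied four (security group, rule) items. The rule set can be reproduced as plain data; the nested-loop shape is an assumption about how the role iterates (a sketch, not kolla-ansible's actual implementation, and no OpenStack calls are made):

```python
# Security groups and rules exactly as they appear in the log items above
# (note the log mixes int and string ports; mirrored here verbatim).
sec_groups = [
    {"name": "lb-mgmt-sec-grp", "enabled": True},
    {"name": "lb-health-mgr-sec-grp", "enabled": True},
]
rules = {
    "lb-mgmt-sec-grp": [
        {"protocol": "icmp"},                                      # amphora reachability
        {"protocol": "tcp", "src_port": 22, "dst_port": 22},       # SSH into amphorae
        {"protocol": "tcp", "src_port": "9443", "dst_port": "9443"},  # amphora agent API
    ],
    "lb-health-mgr-sec-grp": [
        {"protocol": "udp", "src_port": "5555", "dst_port": "5555"},  # heartbeat to health manager
    ],
}

# Expand to the per-item list the loop iterates over, skipping disabled groups.
pairs = [(g["name"], r)
         for g in sec_groups if g["enabled"]
         for r in rules[g["name"]]]
```

Expanding the product yields four items, matching the four `changed:` results logged for testbed-node-0.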
2025-08-29 15:18:29.524913 | orchestrator |
2025-08-29 15:18:29.524923 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-08-29 15:18:29.524933 | orchestrator | Friday 29 August 2025 15:16:09 +0000 (0:00:00.311) 0:02:33.302 *********
2025-08-29 15:18:29.524947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 15:18:29.525031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 15:18:29.525065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 15:18:29.525138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 15:18:29.525195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
[... the same octavia-api, octavia-driver-agent, octavia-health-manager, octavia-housekeeping and octavia-worker items were also reported changed for testbed-node-1 and testbed-node-2; they are identical except for the healthcheck_curl address (http://192.168.16.11:9876 and http://192.168.16.12:9876 respectively) ...]
2025-08-29 15:18:29.525265 | orchestrator |
2025-08-29 15:18:29.525275 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-08-29 15:18:29.525285 | orchestrator | Friday 29 August 2025 15:16:12 +0000 (0:00:02.596) 0:02:35.899 *********
2025-08-29 15:18:29.525295 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:18:29.525305 | orchestrator |
2025-08-29 15:18:29.525314 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-08-29 15:18:29.525324 | orchestrator | Friday 29 August 2025 15:16:12 +0000 (0:00:00.163) 0:02:36.062 *********
2025-08-29 15:18:29.525334 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:18:29.525344 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:18:29.525353 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:18:29.525363 | orchestrator |
2025-08-29 15:18:29.525373 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-08-29 15:18:29.525382 | orchestrator | Friday 29 August 2025 15:16:13 +0000 (0:00:00.507) 0:02:36.569 *********
2025-08-29
15:18:29.525393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.525404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.525414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.525456 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:18:29.525493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.525505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.525515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.525558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:18:29.525642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.525656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.525667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.525687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.525697 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:18:29.525707 | orchestrator | 2025-08-29 15:18:29.525716 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:18:29.525733 | orchestrator | Friday 29 August 2025 15:16:13 +0000 (0:00:00.686) 0:02:37.256 ********* 2025-08-29 15:18:29.525743 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:18:29.525753 | orchestrator | 2025-08-29 15:18:29.525762 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-08-29 15:18:29.525772 | orchestrator | Friday 29 August 2025 15:16:14 +0000 (0:00:00.544) 0:02:37.801 ********* 2025-08-29 15:18:29.525787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.525824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.525836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.525846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.525857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.525873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.525888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.525996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2025-08-29 15:18:29.526006 | orchestrator | 2025-08-29 15:18:29.526070 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-08-29 15:18:29.526081 | orchestrator | Friday 29 August 2025 15:16:19 +0000 (0:00:05.351) 0:02:43.152 ********* 2025-08-29 15:18:29.526092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526181 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:18:29.526192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526249 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:18:29.526272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526330 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:18:29.526340 | orchestrator | 2025-08-29 15:18:29.526349 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-08-29 15:18:29.526359 | orchestrator | Friday 29 August 2025 15:16:20 +0000 (0:00:00.738) 0:02:43.890 ********* 2025-08-29 15:18:29.526374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:18:29.526450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526508 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526525 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:18:29.526535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:18:29.526545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:18:29.526556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:18:29.526590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:18:29.526606 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:18:29.526616 | orchestrator | 2025-08-29 15:18:29.526626 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-08-29 15:18:29.526636 | orchestrator | Friday 29 August 2025 15:16:21 +0000 (0:00:00.927) 0:02:44.818 ********* 2025-08-29 15:18:29.526646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29 | INFO  | Task d9a5fcb5-65ef-437f-84fa-113c7136c069 is in state SUCCESS 2025-08-29 15:18:29.526700 | orchestrator | 2025-08-29 15:18:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:29.526711 |
orchestrator | 2025-08-29 15:18:29.526722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.526739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.526749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.526857 | orchestrator | 2025-08-29 15:18:29.526867 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-08-29 15:18:29.526877 | orchestrator | Friday 29 August 2025 15:16:26 +0000 (0:00:05.698) 0:02:50.516 ********* 2025-08-29 15:18:29.526887 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:18:29.526896 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:18:29.526906 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:18:29.526916 | orchestrator | 2025-08-29 15:18:29.526926 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-08-29 15:18:29.526936 | orchestrator | Friday 29 August 2025 15:16:28 +0000 (0:00:01.779) 0:02:52.295 ********* 2025-08-29 15:18:29.526957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.526996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527149 | orchestrator | 2025-08-29 15:18:29.527158 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-08-29 15:18:29.527195 | orchestrator | Friday 29 August 2025 15:16:45 +0000 (0:00:16.625) 0:03:08.921 ********* 2025-08-29 15:18:29.527205 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.527215 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.527225 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.527234 | orchestrator | 2025-08-29 15:18:29.527244 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-08-29 15:18:29.527254 | orchestrator | Friday 29 August 2025 15:16:46 +0000 (0:00:01.561) 0:03:10.482 ********* 2025-08-29 15:18:29.527263 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527273 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527282 | 
orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527292 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527301 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527311 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527320 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527330 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527339 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527349 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527358 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527368 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527377 | orchestrator | 2025-08-29 15:18:29.527387 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-08-29 15:18:29.527396 | orchestrator | Friday 29 August 2025 15:16:52 +0000 (0:00:05.402) 0:03:15.885 ********* 2025-08-29 15:18:29.527406 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527415 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527431 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527440 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527450 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527459 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527469 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527478 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527488 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527497 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527511 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527527 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527544 | orchestrator | 2025-08-29 15:18:29.527565 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-08-29 15:18:29.527590 | orchestrator | Friday 29 August 2025 15:16:57 +0000 (0:00:05.484) 0:03:21.370 ********* 2025-08-29 15:18:29.527606 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527635 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527651 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:18:29.527666 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527693 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527710 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:18:29.527726 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527742 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527757 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:18:29.527773 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527791 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527807 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:18:29.527822 | orchestrator | 2025-08-29 
15:18:29.527832 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-08-29 15:18:29.527841 | orchestrator | Friday 29 August 2025 15:17:02 +0000 (0:00:05.080) 0:03:26.451 ********* 2025-08-29 15:18:29.527852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.527863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.527883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:18:29.527899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:18:29.527938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.527994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.528004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:18:29.528015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.528025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.528072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:18:29.528083 | orchestrator | 2025-08-29 15:18:29.528093 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:18:29.528103 | orchestrator | Friday 29 August 2025 15:17:06 +0000 (0:00:03.798) 0:03:30.249 ********* 2025-08-29 15:18:29.528113 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:18:29.528123 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:18:29.528132 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:18:29.528142 | orchestrator | 2025-08-29 15:18:29.528151 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-08-29 15:18:29.528185 | orchestrator | Friday 29 August 2025 15:17:07 +0000 (0:00:00.315) 0:03:30.565 ********* 2025-08-29 15:18:29.528203 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528217 | orchestrator | 2025-08-29 15:18:29.528227 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-08-29 15:18:29.528236 | orchestrator | Friday 29 August 2025 15:17:09 +0000 (0:00:02.152) 0:03:32.717 ********* 2025-08-29 
15:18:29.528245 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528255 | orchestrator | 2025-08-29 15:18:29.528264 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-08-29 15:18:29.528281 | orchestrator | Friday 29 August 2025 15:17:11 +0000 (0:00:02.080) 0:03:34.798 ********* 2025-08-29 15:18:29.528297 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528312 | orchestrator | 2025-08-29 15:18:29.528328 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-08-29 15:18:29.528343 | orchestrator | Friday 29 August 2025 15:17:13 +0000 (0:00:02.642) 0:03:37.440 ********* 2025-08-29 15:18:29.528356 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528372 | orchestrator | 2025-08-29 15:18:29.528387 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-08-29 15:18:29.528402 | orchestrator | Friday 29 August 2025 15:17:16 +0000 (0:00:02.142) 0:03:39.583 ********* 2025-08-29 15:18:29.528415 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528429 | orchestrator | 2025-08-29 15:18:29.528442 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:18:29.528465 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:21.141) 0:04:00.724 ********* 2025-08-29 15:18:29.528482 | orchestrator | 2025-08-29 15:18:29.528496 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:18:29.528511 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:00.068) 0:04:00.793 ********* 2025-08-29 15:18:29.528526 | orchestrator | 2025-08-29 15:18:29.528552 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-08-29 15:18:29.528569 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:00.069) 0:04:00.863 ********* 
2025-08-29 15:18:29.528585 | orchestrator | 2025-08-29 15:18:29.528601 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-08-29 15:18:29.528616 | orchestrator | Friday 29 August 2025 15:17:37 +0000 (0:00:00.062) 0:04:00.925 ********* 2025-08-29 15:18:29.528630 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528643 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.528657 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.528669 | orchestrator | 2025-08-29 15:18:29.528682 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-08-29 15:18:29.528710 | orchestrator | Friday 29 August 2025 15:17:54 +0000 (0:00:17.382) 0:04:18.307 ********* 2025-08-29 15:18:29.528724 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528739 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.528756 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.528772 | orchestrator | 2025-08-29 15:18:29.528788 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-08-29 15:18:29.528804 | orchestrator | Friday 29 August 2025 15:18:01 +0000 (0:00:06.946) 0:04:25.254 ********* 2025-08-29 15:18:29.528821 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528836 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.528853 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.528863 | orchestrator | 2025-08-29 15:18:29.528873 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-08-29 15:18:29.528882 | orchestrator | Friday 29 August 2025 15:18:07 +0000 (0:00:05.720) 0:04:30.974 ********* 2025-08-29 15:18:29.528892 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.528901 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.528911 | orchestrator | changed: [testbed-node-0] 2025-08-29 
15:18:29.528920 | orchestrator | 2025-08-29 15:18:29.528930 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-08-29 15:18:29.528939 | orchestrator | Friday 29 August 2025 15:18:15 +0000 (0:00:08.143) 0:04:39.117 ********* 2025-08-29 15:18:29.528949 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:18:29.528958 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:18:29.528968 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:18:29.528977 | orchestrator | 2025-08-29 15:18:29.528987 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:18:29.528997 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:18:29.529008 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:18:29.529018 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:18:29.529027 | orchestrator | 2025-08-29 15:18:29.529037 | orchestrator | 2025-08-29 15:18:29.529046 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:18:29.529056 | orchestrator | Friday 29 August 2025 15:18:26 +0000 (0:00:10.700) 0:04:49.817 ********* 2025-08-29 15:18:29.529065 | orchestrator | =============================================================================== 2025-08-29 15:18:29.529075 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.14s 2025-08-29 15:18:29.529084 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.38s 2025-08-29 15:18:29.529094 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.04s 2025-08-29 15:18:29.529103 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 
16.63s 2025-08-29 15:18:29.529113 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.45s 2025-08-29 15:18:29.529122 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.25s 2025-08-29 15:18:29.529132 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.70s 2025-08-29 15:18:29.529141 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.85s 2025-08-29 15:18:29.529151 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.14s 2025-08-29 15:18:29.529213 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.33s 2025-08-29 15:18:29.529225 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.99s 2025-08-29 15:18:29.529235 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.95s 2025-08-29 15:18:29.529252 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.49s 2025-08-29 15:18:29.529262 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.72s 2025-08-29 15:18:29.529271 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.71s 2025-08-29 15:18:29.529281 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.70s 2025-08-29 15:18:29.529290 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.55s 2025-08-29 15:18:29.529300 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.48s 2025-08-29 15:18:29.529309 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.40s 2025-08-29 15:18:29.529326 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.35s 
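The PLAY RECAP and TASKS RECAP sections above follow Ansible's fixed console format, which makes them easy to post-process when mining job logs for failures or slow tasks. A minimal sketch of parsing a recap host line (the field names `ok`, `changed`, `failed`, etc. are taken from the output above; the regex is an assumption about the format, not part of Ansible's API):

```python
import re

# Matches an Ansible "PLAY RECAP" host line, e.g.
# "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<fields>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> dict:
    """Return {'host': ..., 'ok': int, 'changed': int, ...} for one recap line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("fields").split())
    }
    return {"host": m.group("host"), **counters}

line = "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
recap = parse_recap_line(line)
```

A check like `recap["failed"] == 0 and recap["unreachable"] == 0` is one way a log-scraping job could decide whether a play succeeded without parsing the full task output.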
2025-08-29 15:18:32.563056 | orchestrator | 2025-08-29 15:18:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:35.605914 | orchestrator | 2025-08-29 15:18:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:38.651965 | orchestrator | 2025-08-29 15:18:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:41.703543 | orchestrator | 2025-08-29 15:18:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:44.744159 | orchestrator | 2025-08-29 15:18:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:47.790285 | orchestrator | 2025-08-29 15:18:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:50.836770 | orchestrator | 2025-08-29 15:18:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:53.876303 | orchestrator | 2025-08-29 15:18:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:56.920381 | orchestrator | 2025-08-29 15:18:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:18:59.968182 | orchestrator | 2025-08-29 15:18:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:03.017105 | orchestrator | 2025-08-29 15:19:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:06.056505 | orchestrator | 2025-08-29 15:19:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:09.086717 | orchestrator | 2025-08-29 15:19:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:12.130912 | orchestrator | 2025-08-29 15:19:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:15.172656 | orchestrator | 2025-08-29 15:19:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:18.216610 | orchestrator | 2025-08-29 15:19:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:21.263407 | orchestrator | 
2025-08-29 15:19:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:24.305162 | orchestrator | 2025-08-29 15:19:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:27.356845 | orchestrator | 2025-08-29 15:19:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:19:30.401378 | orchestrator | 2025-08-29 15:19:30.720698 | orchestrator | 2025-08-29 15:19:30.726331 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 15:19:30 UTC 2025 2025-08-29 15:19:30.726448 | orchestrator | 2025-08-29 15:19:31.070904 | orchestrator | ok: Runtime: 0:34:56.153720 2025-08-29 15:19:31.326388 | 2025-08-29 15:19:31.326594 | TASK [Bootstrap services] 2025-08-29 15:19:32.146401 | orchestrator | 2025-08-29 15:19:32.146638 | orchestrator | # BOOTSTRAP 2025-08-29 15:19:32.146686 | orchestrator | 2025-08-29 15:19:32.146711 | orchestrator | + set -e 2025-08-29 15:19:32.146734 | orchestrator | + echo 2025-08-29 15:19:32.146757 | orchestrator | + echo '# BOOTSTRAP' 2025-08-29 15:19:32.146784 | orchestrator | + echo 2025-08-29 15:19:32.146830 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-08-29 15:19:32.157297 | orchestrator | + set -e 2025-08-29 15:19:32.157542 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-08-29 15:19:36.840378 | orchestrator | 2025-08-29 15:19:36 | INFO  | It takes a moment until task 89cdd460-aea6-4084-8bd0-ac9bfd6e8c2a (flavor-manager) has been started and output is visible here. 
2025-08-29 15:19:44.892034 | orchestrator | 2025-08-29 15:19:40 | INFO  | Flavor SCS-1V-4 created 2025-08-29 15:19:44.892285 | orchestrator | 2025-08-29 15:19:40 | INFO  | Flavor SCS-2V-8 created 2025-08-29 15:19:44.892319 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-4V-16 created 2025-08-29 15:19:44.892384 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-8V-32 created 2025-08-29 15:19:44.892406 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-1V-2 created 2025-08-29 15:19:44.892425 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-2V-4 created 2025-08-29 15:19:44.892439 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-4V-8 created 2025-08-29 15:19:44.892451 | orchestrator | 2025-08-29 15:19:41 | INFO  | Flavor SCS-8V-16 created 2025-08-29 15:19:44.892476 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-16V-32 created 2025-08-29 15:19:44.892487 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-1V-8 created 2025-08-29 15:19:44.892498 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-2V-16 created 2025-08-29 15:19:44.892509 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-4V-32 created 2025-08-29 15:19:44.892520 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-1L-1 created 2025-08-29 15:19:44.892531 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-2V-4-20s created 2025-08-29 15:19:44.892542 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-4V-16-100s created 2025-08-29 15:19:44.892553 | orchestrator | 2025-08-29 15:19:42 | INFO  | Flavor SCS-1V-4-10 created 2025-08-29 15:19:44.892564 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-2V-8-20 created 2025-08-29 15:19:44.892575 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-4V-16-50 created 2025-08-29 15:19:44.892586 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-8V-32-100 created 2025-08-29 15:19:44.892597 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-1V-2-5 created 
2025-08-29 15:19:44.892607 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-2V-4-10 created 2025-08-29 15:19:44.892618 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-4V-8-20 created 2025-08-29 15:19:44.892630 | orchestrator | 2025-08-29 15:19:43 | INFO  | Flavor SCS-8V-16-50 created 2025-08-29 15:19:44.892641 | orchestrator | 2025-08-29 15:19:44 | INFO  | Flavor SCS-16V-32-100 created 2025-08-29 15:19:44.892652 | orchestrator | 2025-08-29 15:19:44 | INFO  | Flavor SCS-1V-8-20 created 2025-08-29 15:19:44.892663 | orchestrator | 2025-08-29 15:19:44 | INFO  | Flavor SCS-2V-16-50 created 2025-08-29 15:19:44.892674 | orchestrator | 2025-08-29 15:19:44 | INFO  | Flavor SCS-4V-32-100 created 2025-08-29 15:19:44.892685 | orchestrator | 2025-08-29 15:19:44 | INFO  | Flavor SCS-1L-1-5 created 2025-08-29 15:19:47.051421 | orchestrator | 2025-08-29 15:19:47 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-08-29 15:19:57.299453 | orchestrator | 2025-08-29 15:19:57 | INFO  | Task 3c78ad59-e287-4a30-ab80-08f72317d4b3 (bootstrap-basic) was prepared for execution. 2025-08-29 15:19:57.299622 | orchestrator | 2025-08-29 15:19:57 | INFO  | It takes a moment until task 3c78ad59-e287-4a30-ab80-08f72317d4b3 (bootstrap-basic) has been started and output is visible here. 
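The flavor names created by the flavor-manager run above encode their resources in the SCS naming convention. A simplified sketch of decoding them (the pattern `SCS-<cpus><V|L>-<ram>[-<disk>[s]]` and the meaning of the `L` and `s` markers are assumptions based on the SCS flavor naming scheme, not the full standard):

```python
import re

# Simplified decoder for names like SCS-1V-4, SCS-2V-4-20s, SCS-1L-1-5.
# Assumed convention: <cpus> vCPUs, V/L = vCPU performance class,
# <ram> GiB RAM, optional <disk> GB root disk, trailing "s" = local SSD.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[VL])-(?P<ram>\d+)(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name!r}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cpu_class"),
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("ssd") is not None,
    }

flavor = parse_scs_flavor("SCS-2V-4-20s")
```

Under this reading, `SCS-2V-4-20s` from the log above would be 2 vCPUs, 4 GiB RAM, and a 20 GB local SSD disk, while names without a disk component (e.g. `SCS-1V-4`) leave the root disk unspecified.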
2025-08-29 15:20:58.538049 | orchestrator |
2025-08-29 15:20:58.538124 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-08-29 15:20:58.538132 | orchestrator |
2025-08-29 15:20:58.538137 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 15:20:58.538143 | orchestrator | Friday 29 August 2025  15:20:01 +0000 (0:00:00.079)       0:00:00.079 *********
2025-08-29 15:20:58.538148 | orchestrator | ok: [localhost]
2025-08-29 15:20:58.538154 | orchestrator |
2025-08-29 15:20:58.538159 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-08-29 15:20:58.538165 | orchestrator | Friday 29 August 2025  15:20:03 +0000 (0:00:01.913)       0:00:01.992 *********
2025-08-29 15:20:58.538170 | orchestrator | ok: [localhost]
2025-08-29 15:20:58.538175 | orchestrator |
2025-08-29 15:20:58.538180 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-08-29 15:20:58.538185 | orchestrator | Friday 29 August 2025  15:20:12 +0000 (0:00:08.621)       0:00:10.614 *********
2025-08-29 15:20:58.538190 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538195 | orchestrator |
2025-08-29 15:20:58.538200 | orchestrator | TASK [Get volume type local] ***************************************************
2025-08-29 15:20:58.538204 | orchestrator | Friday 29 August 2025  15:20:19 +0000 (0:00:07.823)       0:00:18.438 *********
2025-08-29 15:20:58.538209 | orchestrator | ok: [localhost]
2025-08-29 15:20:58.538214 | orchestrator |
2025-08-29 15:20:58.538219 | orchestrator | TASK [Create volume type local] ************************************************
2025-08-29 15:20:58.538223 | orchestrator | Friday 29 August 2025  15:20:27 +0000 (0:00:07.192)       0:00:25.630 *********
2025-08-29 15:20:58.538228 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538234 | orchestrator |
2025-08-29 15:20:58.538239 | orchestrator | TASK [Create public network] ***************************************************
2025-08-29 15:20:58.538244 | orchestrator | Friday 29 August 2025  15:20:34 +0000 (0:00:07.299)       0:00:32.930 *********
2025-08-29 15:20:58.538248 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538253 | orchestrator |
2025-08-29 15:20:58.538257 | orchestrator | TASK [Set public network to default] *******************************************
2025-08-29 15:20:58.538262 | orchestrator | Friday 29 August 2025  15:20:39 +0000 (0:00:05.317)       0:00:38.248 *********
2025-08-29 15:20:58.538266 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538271 | orchestrator |
2025-08-29 15:20:58.538286 | orchestrator | TASK [Create public subnet] ****************************************************
2025-08-29 15:20:58.538291 | orchestrator | Friday 29 August 2025  15:20:46 +0000 (0:00:06.525)       0:00:44.774 *********
2025-08-29 15:20:58.538296 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538300 | orchestrator |
2025-08-29 15:20:58.538305 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-08-29 15:20:58.538309 | orchestrator | Friday 29 August 2025  15:20:50 +0000 (0:00:04.462)       0:00:49.237 *********
2025-08-29 15:20:58.538314 | orchestrator | changed: [localhost]
2025-08-29 15:20:58.538318 | orchestrator |
2025-08-29 15:20:58.538323 | orchestrator | TASK [Create manager role] *****************************************************
2025-08-29 15:20:58.538327 | orchestrator | Friday 29 August 2025  15:20:54 +0000 (0:00:03.879)       0:00:53.116 *********
2025-08-29 15:20:58.538332 | orchestrator | ok: [localhost]
2025-08-29 15:20:58.538336 | orchestrator |
2025-08-29 15:20:58.538341 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:20:58.538346 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:20:58.538351 | orchestrator |
2025-08-29 15:20:58.538355 | orchestrator |
2025-08-29 15:20:58.538360 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:20:58.538365 | orchestrator | Friday 29 August 2025  15:20:58 +0000 (0:00:03.680)       0:00:56.796 *********
2025-08-29 15:20:58.538385 | orchestrator | ===============================================================================
2025-08-29 15:20:58.538389 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.62s
2025-08-29 15:20:58.538394 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.82s
2025-08-29 15:20:58.538399 | orchestrator | Create volume type local ------------------------------------------------ 7.30s
2025-08-29 15:20:58.538407 | orchestrator | Get volume type local --------------------------------------------------- 7.19s
2025-08-29 15:20:58.538414 | orchestrator | Set public network to default ------------------------------------------- 6.53s
2025-08-29 15:20:58.538423 | orchestrator | Create public network --------------------------------------------------- 5.32s
2025-08-29 15:20:58.538430 | orchestrator | Create public subnet ---------------------------------------------------- 4.46s
2025-08-29 15:20:58.538437 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.88s
2025-08-29 15:20:58.538445 | orchestrator | Create manager role ----------------------------------------------------- 3.68s
2025-08-29 15:20:58.538452 | orchestrator | Gathering Facts --------------------------------------------------------- 1.91s
2025-08-29 15:21:00.807125 | orchestrator | 2025-08-29 15:21:00 | INFO  | It takes a moment until task c9882291-2b53-44e0-be4d-0b29005949a6 (image-manager) has been started and output is visible here.
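The bootstrap-basic play above creates the LUKS and local volume types and the public provider network, subnet, and subnet pool. A minimal sketch of equivalent tasks, assuming the openstack.cloud Ansible collection and the openstack CLI (module names and parameters here are an approximation, not the testbed's actual playbook):

```yaml
# Hypothetical sketch of tasks like the ones traced above.
- name: Create volume type LUKS
  ansible.builtin.command: >
    openstack volume type create
    --encryption-provider luks
    --encryption-control-location front-end
    LUKS
  changed_when: true

- name: Create public network
  openstack.cloud.network:
    name: public
    external: true

- name: Create public subnet
  openstack.cloud.subnet:
    name: subnet-public
    network_name: public
```

The `ok`/`changed` pattern in the trace (each "Get" task is `ok`, each "Create" task is `changed`) suggests the real play checks for existing resources first so it stays idempotent on re-runs.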
2025-08-29 15:21:42.094501 | orchestrator | 2025-08-29 15:21:04 | INFO  | Processing image 'Cirros 0.6.2'
2025-08-29 15:21:42.094700 | orchestrator | 2025-08-29 15:21:04 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-08-29 15:21:42.094723 | orchestrator | 2025-08-29 15:21:04 | INFO  | Importing image Cirros 0.6.2
2025-08-29 15:21:42.094736 | orchestrator | 2025-08-29 15:21:04 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-08-29 15:21:42.094748 | orchestrator | 2025-08-29 15:21:06 | INFO  | Waiting for image to leave queued state...
2025-08-29 15:21:42.094760 | orchestrator | 2025-08-29 15:21:08 | INFO  | Waiting for import to complete...
2025-08-29 15:21:42.094772 | orchestrator | 2025-08-29 15:21:18 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-08-29 15:21:42.094783 | orchestrator | 2025-08-29 15:21:19 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-08-29 15:21:42.094793 | orchestrator | 2025-08-29 15:21:19 | INFO  | Setting internal_version = 0.6.2
2025-08-29 15:21:42.094804 | orchestrator | 2025-08-29 15:21:19 | INFO  | Setting image_original_user = cirros
2025-08-29 15:21:42.094816 | orchestrator | 2025-08-29 15:21:19 | INFO  | Adding tag os:cirros
2025-08-29 15:21:42.094827 | orchestrator | 2025-08-29 15:21:19 | INFO  | Setting property architecture: x86_64
2025-08-29 15:21:42.094838 | orchestrator | 2025-08-29 15:21:19 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 15:21:42.094848 | orchestrator | 2025-08-29 15:21:19 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 15:21:42.094859 | orchestrator | 2025-08-29 15:21:20 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 15:21:42.094870 | orchestrator | 2025-08-29 15:21:20 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 15:21:42.094881 | orchestrator | 2025-08-29 15:21:20 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 15:21:42.094892 | orchestrator | 2025-08-29 15:21:20 | INFO  | Setting property os_distro: cirros
2025-08-29 15:21:42.094903 | orchestrator | 2025-08-29 15:21:20 | INFO  | Setting property replace_frequency: never
2025-08-29 15:21:42.094914 | orchestrator | 2025-08-29 15:21:21 | INFO  | Setting property uuid_validity: none
2025-08-29 15:21:42.094925 | orchestrator | 2025-08-29 15:21:21 | INFO  | Setting property provided_until: none
2025-08-29 15:21:42.094956 | orchestrator | 2025-08-29 15:21:21 | INFO  | Setting property image_description: Cirros
2025-08-29 15:21:42.094976 | orchestrator | 2025-08-29 15:21:21 | INFO  | Setting property image_name: Cirros
2025-08-29 15:21:42.094987 | orchestrator | 2025-08-29 15:21:21 | INFO  | Setting property internal_version: 0.6.2
2025-08-29 15:21:42.095003 | orchestrator | 2025-08-29 15:21:22 | INFO  | Setting property image_original_user: cirros
2025-08-29 15:21:42.095043 | orchestrator | 2025-08-29 15:21:22 | INFO  | Setting property os_version: 0.6.2
2025-08-29 15:21:42.095057 | orchestrator | 2025-08-29 15:21:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-08-29 15:21:42.095071 | orchestrator | 2025-08-29 15:21:22 | INFO  | Setting property image_build_date: 2023-05-30
2025-08-29 15:21:42.095084 | orchestrator | 2025-08-29 15:21:22 | INFO  | Checking status of 'Cirros 0.6.2'
2025-08-29 15:21:42.095096 | orchestrator | 2025-08-29 15:21:22 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-08-29 15:21:42.095108 | orchestrator | 2025-08-29 15:21:22 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-08-29 15:21:42.095121 | orchestrator | 2025-08-29 15:21:23 | INFO  | Processing image 'Cirros 0.6.3'
2025-08-29 15:21:42.095133 | orchestrator | 2025-08-29 15:21:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-08-29 15:21:42.095145 | orchestrator | 2025-08-29 15:21:23 | INFO  | Importing image Cirros 0.6.3
2025-08-29 15:21:42.095157 | orchestrator | 2025-08-29 15:21:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-08-29 15:21:42.095170 | orchestrator | 2025-08-29 15:21:24 | INFO  | Waiting for image to leave queued state...
2025-08-29 15:21:42.095182 | orchestrator | 2025-08-29 15:21:26 | INFO  | Waiting for import to complete...
2025-08-29 15:21:42.095194 | orchestrator | 2025-08-29 15:21:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-08-29 15:21:42.095224 | orchestrator | 2025-08-29 15:21:37 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-08-29 15:21:42.095238 | orchestrator | 2025-08-29 15:21:37 | INFO  | Setting internal_version = 0.6.3
2025-08-29 15:21:42.095250 | orchestrator | 2025-08-29 15:21:37 | INFO  | Setting image_original_user = cirros
2025-08-29 15:21:42.095263 | orchestrator | 2025-08-29 15:21:37 | INFO  | Adding tag os:cirros
2025-08-29 15:21:42.095275 | orchestrator | 2025-08-29 15:21:37 | INFO  | Setting property architecture: x86_64
2025-08-29 15:21:42.095287 | orchestrator | 2025-08-29 15:21:37 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 15:21:42.095299 | orchestrator | 2025-08-29 15:21:38 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 15:21:42.095312 | orchestrator | 2025-08-29 15:21:38 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 15:21:42.095324 | orchestrator | 2025-08-29 15:21:38 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 15:21:42.095337 | orchestrator | 2025-08-29 15:21:38 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 15:21:42.095350 | orchestrator | 2025-08-29 15:21:39 | INFO  | Setting property os_distro: cirros
2025-08-29 15:21:42.095360 | orchestrator | 2025-08-29 15:21:39 | INFO  | Setting property replace_frequency: never
2025-08-29 15:21:42.095371 | orchestrator | 2025-08-29 15:21:39 | INFO  | Setting property uuid_validity: none
2025-08-29 15:21:42.095391 | orchestrator | 2025-08-29 15:21:39 | INFO  | Setting property provided_until: none
2025-08-29 15:21:42.095402 | orchestrator | 2025-08-29 15:21:39 | INFO  | Setting property image_description: Cirros
2025-08-29 15:21:42.095412 | orchestrator | 2025-08-29 15:21:40 | INFO  | Setting property image_name: Cirros
2025-08-29 15:21:42.095423 | orchestrator | 2025-08-29 15:21:40 | INFO  | Setting property internal_version: 0.6.3
2025-08-29 15:21:42.095434 | orchestrator | 2025-08-29 15:21:40 | INFO  | Setting property image_original_user: cirros
2025-08-29 15:21:42.095445 | orchestrator | 2025-08-29 15:21:40 | INFO  | Setting property os_version: 0.6.3
2025-08-29 15:21:42.095456 | orchestrator | 2025-08-29 15:21:40 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-08-29 15:21:42.095466 | orchestrator | 2025-08-29 15:21:41 | INFO  | Setting property image_build_date: 2024-09-26
2025-08-29 15:21:42.095477 | orchestrator | 2025-08-29 15:21:41 | INFO  | Checking status of 'Cirros 0.6.3'
2025-08-29 15:21:42.095488 | orchestrator | 2025-08-29 15:21:41 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-08-29 15:21:42.095504 | orchestrator | 2025-08-29 15:21:41 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-08-29 15:21:42.405724 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-08-29 15:21:44.484474 | orchestrator | 2025-08-29 15:21:44 | INFO  | date: 2025-08-29
2025-08-29 15:21:44.484629 | orchestrator | 2025-08-29 15:21:44 | INFO  | image: octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 15:21:44.484649 | orchestrator | 2025-08-29 15:21:44 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 15:21:44.484684 | orchestrator | 2025-08-29 15:21:44 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2.CHECKSUM
2025-08-29 15:21:44.531274 | orchestrator | 2025-08-29 15:21:44 | INFO  | checksum: 9bd11944634778935b43eb626302bc74d657e4c319fdb6fd625fdfeb36ffc69d
2025-08-29 15:21:44.603652 | orchestrator | 2025-08-29 15:21:44 | INFO  | It takes a moment until task fffff86a-3d70-4854-bffc-57c9d16ecb22 (image-manager) has been started and output is visible here.
2025-08-29 15:22:43.798508 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
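The image-manager runs above are driven by per-image YAML definitions: every property, tag, and visibility setting in the log corresponds to a field in such a definition, and images that have no definition are only warned about, never deleted. A rough sketch of what a definition for the Cirros image could look like (field names are an approximation from memory, not the testbed's actual file):

```yaml
# Hypothetical openstack-image-manager definition sketch.
images:
  - name: Cirros
    format: qcow2
    login: cirros
    status: active
    visibility: public
    multi: true
    meta:
      architecture: x86_64
      hw_disk_bus: scsi
      hw_rng_model: virtio
      hw_scsi_model: virtio-scsi
      hw_watchdog_action: reset
      hypervisor_type: qemu
      os_distro: cirros
      replace_frequency: never
      uuid_validity: none
      provided_until: none
    tags: []
    versions:
      - version: '0.6.3'
        url: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
        build_date: 2024-09-26
```

With `multi: true` the tool keeps one Glance image per version ("Cirros 0.6.2", "Cirros 0.6.3") under a shared base name, which matches the per-version processing seen in the log.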
2025-08-29 15:22:43.798602 | orchestrator |   from pkg_resources import get_distribution, DistributionNotFound
2025-08-29 15:22:43.798618 | orchestrator | 2025-08-29 15:21:46 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 15:22:43.798634 | orchestrator | 2025-08-29 15:21:46 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2: 200
2025-08-29 15:22:43.798647 | orchestrator | 2025-08-29 15:21:46 | INFO  | Importing image OpenStack Octavia Amphora 2025-08-29
2025-08-29 15:22:43.798658 | orchestrator | 2025-08-29 15:21:46 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 15:22:43.798704 | orchestrator | 2025-08-29 15:21:46 | INFO  | Waiting for image to leave queued state...
2025-08-29 15:22:43.798734 | orchestrator | 2025-08-29 15:21:48 | INFO  | Waiting for import to complete...
2025-08-29 15:22:43.798745 | orchestrator | 2025-08-29 15:21:59 | INFO  | Waiting for import to complete...
2025-08-29 15:22:43.798755 | orchestrator | 2025-08-29 15:22:09 | INFO  | Waiting for import to complete...
2025-08-29 15:22:43.798765 | orchestrator | 2025-08-29 15:22:19 | INFO  | Waiting for import to complete...
2025-08-29 15:22:43.798774 | orchestrator | 2025-08-29 15:22:29 | INFO  | Waiting for import to complete...
2025-08-29 15:22:43.798784 | orchestrator | 2025-08-29 15:22:39 | INFO  | Import of 'OpenStack Octavia Amphora 2025-08-29' successfully completed, reloading images
2025-08-29 15:22:43.798795 | orchestrator | 2025-08-29 15:22:39 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 15:22:43.798805 | orchestrator | 2025-08-29 15:22:39 | INFO  | Setting internal_version = 2025-08-29
2025-08-29 15:22:43.798814 | orchestrator | 2025-08-29 15:22:39 | INFO  | Setting image_original_user = ubuntu
2025-08-29 15:22:43.798823 | orchestrator | 2025-08-29 15:22:39 | INFO  | Adding tag amphora
2025-08-29 15:22:43.798833 | orchestrator | 2025-08-29 15:22:39 | INFO  | Adding tag os:ubuntu
2025-08-29 15:22:43.798843 | orchestrator | 2025-08-29 15:22:40 | INFO  | Setting property architecture: x86_64
2025-08-29 15:22:43.798852 | orchestrator | 2025-08-29 15:22:40 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 15:22:43.798862 | orchestrator | 2025-08-29 15:22:40 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 15:22:43.798878 | orchestrator | 2025-08-29 15:22:40 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 15:22:43.798888 | orchestrator | 2025-08-29 15:22:40 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 15:22:43.798898 | orchestrator | 2025-08-29 15:22:41 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 15:22:43.798907 | orchestrator | 2025-08-29 15:22:41 | INFO  | Setting property os_distro: ubuntu
2025-08-29 15:22:43.798917 | orchestrator | 2025-08-29 15:22:41 | INFO  | Setting property replace_frequency: quarterly
2025-08-29 15:22:43.798926 | orchestrator | 2025-08-29 15:22:41 | INFO  | Setting property uuid_validity: last-1
2025-08-29 15:22:43.798936 | orchestrator | 2025-08-29 15:22:41 | INFO  | Setting property provided_until: none
2025-08-29 15:22:43.798946 | orchestrator | 2025-08-29 15:22:42 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-08-29 15:22:43.798955 | orchestrator | 2025-08-29 15:22:42 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-08-29 15:22:43.798965 | orchestrator | 2025-08-29 15:22:42 | INFO  | Setting property internal_version: 2025-08-29
2025-08-29 15:22:43.798975 | orchestrator | 2025-08-29 15:22:42 | INFO  | Setting property image_original_user: ubuntu
2025-08-29 15:22:43.798984 | orchestrator | 2025-08-29 15:22:42 | INFO  | Setting property os_version: 2025-08-29
2025-08-29 15:22:43.798994 | orchestrator | 2025-08-29 15:22:43 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 15:22:43.799019 | orchestrator | 2025-08-29 15:22:43 | INFO  | Setting property image_build_date: 2025-08-29
2025-08-29 15:22:43.799030 | orchestrator | 2025-08-29 15:22:43 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 15:22:43.799039 | orchestrator | 2025-08-29 15:22:43 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 15:22:43.799056 | orchestrator | 2025-08-29 15:22:43 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-08-29 15:22:43.799068 | orchestrator | 2025-08-29 15:22:43 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-08-29 15:22:43.799080 | orchestrator | 2025-08-29 15:22:43 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-08-29 15:22:43.799091 | orchestrator | 2025-08-29 15:22:43 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-08-29 15:22:44.528184 | orchestrator | ok: Runtime: 0:03:12.376287
2025-08-29 15:22:44.549323 |
2025-08-29 15:22:44.549525 | TASK [Run checks]
2025-08-29 15:22:45.244330 | orchestrator | + set -e
2025-08-29 15:22:45.350582 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 15:22:45.350657 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 15:22:45.350711 | orchestrator | ++ INTERACTIVE=false
2025-08-29 15:22:45.350722 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 15:22:45.350731 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 15:22:45.350760 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-08-29 15:22:45.350804 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-08-29 15:22:45.350833 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 15:22:45.350849 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 15:22:45.350862 | orchestrator |
2025-08-29 15:22:45.350876 | orchestrator | # CHECK
2025-08-29 15:22:45.350889 | orchestrator |
2025-08-29 15:22:45.350899 | orchestrator | + echo
2025-08-29 15:22:45.350914 | orchestrator | + echo '# CHECK'
2025-08-29 15:22:45.350921 | orchestrator | + echo
2025-08-29 15:22:45.350933 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-08-29 15:22:45.350941 | orchestrator | ++ semver 9.2.0 5.0.0
2025-08-29 15:22:45.350949 | orchestrator |
2025-08-29 15:22:45.350956 | orchestrator | ## Containers @ testbed-manager
2025-08-29 15:22:45.350964 | orchestrator |
2025-08-29 15:22:45.350973 | orchestrator | + [[ 1 -eq -1 ]]
2025-08-29 15:22:45.350980 | orchestrator | + echo
2025-08-29 15:22:45.350988 | orchestrator | + echo '## Containers @ testbed-manager'
2025-08-29 15:22:45.350995 | orchestrator | + echo
2025-08-29 15:22:45.351003 | orchestrator | + osism container testbed-manager ps
2025-08-29 15:22:47.588922 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-08-29 15:22:47.589007 | orchestrator | fe7cf6fbcbc5 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-08-29 15:22:47.589020 | orchestrator | a35f8768ff68 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-08-29 15:22:47.589026 | orchestrator | 3a3a077d3070 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-08-29 15:22:47.589036 | orchestrator | 572db661419b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-08-29 15:22:47.589041 | orchestrator | 3182fb46947f registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-08-29 15:22:47.589047 | orchestrator | abae0ddb782f registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient
2025-08-29 15:22:47.589055 | orchestrator | 41ae738b981e registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 15:22:47.589061 | orchestrator | 51b3573d118b registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 15:22:47.589082 | orchestrator | 120a6a3babd0 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 15:22:47.589087 | orchestrator | 784b06be161f phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-08-29 15:22:47.589093 | orchestrator | 33df68d91228 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient
2025-08-29 15:22:47.589098 | orchestrator | 6498b604bb46 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-08-29 15:22:47.589103 | orchestrator | 31992fec6979 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-08-29 15:22:47.589111 | orchestrator | 9340d6f2038c registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 59 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1
2025-08-29 15:22:47.589127 | orchestrator | 16f875f6d20e registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-ansible
2025-08-29 15:22:47.589133 | orchestrator | a4ef83908ccb registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) kolla-ansible
2025-08-29 15:22:47.589138 | orchestrator | 9964baa6c978 registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) osism-kubernetes
2025-08-29 15:22:47.589143 | orchestrator | dee9e81d6ca7 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 59 minutes ago Up 39 minutes (healthy) ceph-ansible
2025-08-29 15:22:47.589148 | orchestrator | d1e10bcd5bd2 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 59 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1
2025-08-29 15:22:47.589154 | orchestrator | 253b7f727bde registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-openstack-1
2025-08-29 15:22:47.589159 | orchestrator | 0491bf7e75fb registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-flower-1
2025-08-29 15:22:47.589164 | orchestrator | e763a2b73621 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1
2025-08-29 15:22:47.589173 | orchestrator | 04414a1199f6 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-listener-1
2025-08-29 15:22:47.589178 | orchestrator | 894a8a75713b registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1
2025-08-29 15:22:47.589184 | orchestrator | 87230b21ed49 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-beat-1
2025-08-29 15:22:47.589189 | orchestrator | 5d73ca41b046 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 59 minutes ago Up 39 minutes (healthy) osismclient
2025-08-29 15:22:47.589194 | orchestrator | 276f9147c909 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-08-29 15:22:47.589199 | orchestrator | 30088b7a8186 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-08-29 15:22:47.876700 | orchestrator |
2025-08-29 15:22:47.876761 | orchestrator | ## Images @ testbed-manager
2025-08-29 15:22:47.876771 | orchestrator |
2025-08-29 15:22:47.876777 | orchestrator | + echo
2025-08-29 15:22:47.876784 | orchestrator | + echo '## Images @ testbed-manager'
2025-08-29 15:22:47.876790 | orchestrator | + echo
2025-08-29 15:22:47.876796 | orchestrator | + osism container testbed-manager images
2025-08-29 15:22:50.073068 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-08-29 15:22:50.073157 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e303c4555969 8 hours ago 237MB
2025-08-29 15:22:50.073169 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 3 weeks ago 11.5MB
2025-08-29 15:22:50.073175 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 6 weeks ago 571MB
2025-08-29 15:22:50.073181 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB
2025-08-29 15:22:50.073209 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB
2025-08-29 15:22:50.073215 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB
2025-08-29 15:22:50.073221 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 6 weeks ago 891MB
2025-08-29 15:22:50.073226 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 6 weeks ago 360MB
2025-08-29 15:22:50.073231 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB
2025-08-29 15:22:50.073237 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 6 weeks ago 456MB
2025-08-29 15:22:50.073242 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB
2025-08-29 15:22:50.073247 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 6 weeks ago 575MB
2025-08-29 15:22:50.073271 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 6 weeks ago 535MB
2025-08-29 15:22:50.073277 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 6 weeks ago 308MB
2025-08-29 15:22:50.073282 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 6 weeks ago 1.21GB
2025-08-29 15:22:50.073297 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 7 weeks ago 310MB
2025-08-29 15:22:50.073303 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 7 weeks ago 41.4MB
2025-08-29 15:22:50.073308 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB
2025-08-29 15:22:50.073313 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 2 months ago 329MB
2025-08-29 15:22:50.073318 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 months ago 453MB
2025-08-29 15:22:50.073324 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-08-29 15:22:50.073329 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 11 months ago 300MB
2025-08-29 15:22:50.073334 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 14 months ago 146MB
2025-08-29 15:22:50.378848 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-08-29 15:22:50.379885 | orchestrator | ++ semver 9.2.0 5.0.0
2025-08-29 15:22:50.446632 | orchestrator |
2025-08-29 15:22:50.446727 | orchestrator | ## Containers @ testbed-node-0
2025-08-29 15:22:50.446735 | orchestrator |
2025-08-29 15:22:50.446740 | orchestrator | + [[ 1 -eq -1 ]]
2025-08-29 15:22:50.446744 | orchestrator | + echo
2025-08-29 15:22:50.446749 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-08-29 15:22:50.446754 | orchestrator | + echo
2025-08-29 15:22:50.446758 | orchestrator | + osism container testbed-node-0 ps
2025-08-29 15:22:52.928080 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-08-29 15:22:52.928150 | orchestrator | 4802299151aa registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-08-29 15:22:52.928158 | orchestrator | abed6b3de510 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-08-29 15:22:52.928163 | orchestrator | dc71f8bb4830 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-08-29 15:22:52.928167 | orchestrator | 88d4cd401527 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-08-29 15:22:52.928172 | orchestrator | 825faf2fe038 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-08-29 15:22:52.928176 | orchestrator | fd1e4bcb3278 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-08-29 15:22:52.928180 | orchestrator | 7d601d4ea2a0 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-08-29 15:22:52.928185 | orchestrator | 836fbdbe6167 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-08-29 15:22:52.928198 | orchestrator | a74763263201 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-08-29 15:22:52.928202 | orchestrator | fa75b9b01426 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-08-29 15:22:52.928206 | orchestrator | 99105c81f739 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-08-29 15:22:52.928209 | orchestrator | 397ec7f535d9 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-08-29 15:22:52.928213 | orchestrator | 073ee1b65915 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-08-29 15:22:52.928217 | orchestrator | 20a559df6297 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-08-29 15:22:52.928221 | orchestrator | 45530e7ed7c6 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-08-29 15:22:52.928225 | orchestrator | f10e14d8d616 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-08-29 15:22:52.928235 | orchestrator | 0a6f783ba69c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-08-29 15:22:52.929000 | orchestrator | 798fe1b6437a registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-08-29 15:22:52.929021 | orchestrator | 37fde2f2cc2c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-08-29 15:22:52.929025 | orchestrator | 738d25da845a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-08-29 15:22:52.929030 | orchestrator | c422703032c9 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-08-29 15:22:52.929035 | orchestrator | 4670b050b9bf registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-08-29 15:22:52.929039 | orchestrator | 7132a732c032 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy)
nova_scheduler 2025-08-29 15:22:52.929042 | orchestrator | 6f269dfb2492 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-08-29 15:22:52.929047 | orchestrator | 24cb34c7ecea registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-08-29 15:22:52.929060 | orchestrator | 7891c728a7be registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-08-29 15:22:52.929075 | orchestrator | 85d7c5353794 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-08-29 15:22:52.929080 | orchestrator | ea854eaa9828 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-08-29 15:22:52.929084 | orchestrator | 445c1bbf8be1 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-08-29 15:22:52.929088 | orchestrator | 59ee9bdf56ac registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-08-29 15:22:52.929094 | orchestrator | 1ae5f7e4b3a9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-08-29 15:22:52.929098 | orchestrator | 5447c0b2e755 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-08-29 15:22:52.929102 | orchestrator | 9fc6cd33ca7e registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 
18 minutes (healthy) keystone 2025-08-29 15:22:52.929106 | orchestrator | 04a21cbf52b3 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-08-29 15:22:52.929110 | orchestrator | 2e384d918b52 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-08-29 15:22:52.929114 | orchestrator | 85f3f6d6a51a registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-08-29 15:22:52.929118 | orchestrator | 2eb09ee5c9cb registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-08-29 15:22:52.929124 | orchestrator | ef0d341afb78 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-08-29 15:22:52.929135 | orchestrator | 12a7d406bbd9 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-08-29 15:22:52.929140 | orchestrator | a31659cdc3dd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-08-29 15:22:52.929144 | orchestrator | 857bb04d9bb0 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 15:22:52.929147 | orchestrator | e7c4fcb78945 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-08-29 15:22:52.929151 | orchestrator | ced7ab23e246 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-08-29 15:22:52.929155 | orchestrator | 35785d4e3c37 
registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 15:22:52.929165 | orchestrator | c9d2d4c0e62f registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 15:22:52.929169 | orchestrator | 2fbd21fefab3 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-08-29 15:22:52.929173 | orchestrator | 20c269cc0960 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-08-29 15:22:52.929177 | orchestrator | 070bd6071033 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-08-29 15:22:52.929181 | orchestrator | cfe9f7295217 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-08-29 15:22:52.929185 | orchestrator | 5a87e8dda08d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-08-29 15:22:52.929189 | orchestrator | 285a8060e8d0 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-08-29 15:22:52.929193 | orchestrator | ce69a1cbf81c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-08-29 15:22:52.929197 | orchestrator | 707b9dd3a9a3 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-08-29 15:22:52.929201 | orchestrator | d11bbfbf5525 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago 
Up 29 minutes (healthy) memcached 2025-08-29 15:22:52.929205 | orchestrator | cbc096323f0f registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-08-29 15:22:52.929209 | orchestrator | 456bb0a643de registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-08-29 15:22:52.929213 | orchestrator | 314374d86fbd registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 31 minutes fluentd 2025-08-29 15:22:53.224835 | orchestrator | 2025-08-29 15:22:53.224957 | orchestrator | ## Images @ testbed-node-0 2025-08-29 15:22:53.224970 | orchestrator | 2025-08-29 15:22:53.224981 | orchestrator | + echo 2025-08-29 15:22:53.224991 | orchestrator | + echo '## Images @ testbed-node-0' 2025-08-29 15:22:53.225002 | orchestrator | + echo 2025-08-29 15:22:53.225011 | orchestrator | + osism container testbed-node-0 images 2025-08-29 15:22:55.443603 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 15:22:55.443674 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 15:22:55.443680 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB 2025-08-29 15:22:55.443685 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB 2025-08-29 15:22:55.443702 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB 2025-08-29 15:22:55.443706 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB 2025-08-29 15:22:55.443726 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB 2025-08-29 15:22:55.443730 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 
cd87896ace76 6 weeks ago 318MB 2025-08-29 15:22:55.443734 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB 2025-08-29 15:22:55.443739 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 15:22:55.443742 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB 2025-08-29 15:22:55.443752 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 15:22:55.443756 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB 2025-08-29 15:22:55.443760 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB 2025-08-29 15:22:55.443764 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB 2025-08-29 15:22:55.443768 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB 2025-08-29 15:22:55.443772 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 15:22:55.443776 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB 2025-08-29 15:22:55.443780 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 15:22:55.443783 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB 2025-08-29 15:22:55.443787 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB 2025-08-29 15:22:55.443791 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 
d7d5c3586026 6 weeks ago 324MB 2025-08-29 15:22:55.443795 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB 2025-08-29 15:22:55.443798 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB 2025-08-29 15:22:55.443802 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB 2025-08-29 15:22:55.443806 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB 2025-08-29 15:22:55.443810 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB 2025-08-29 15:22:55.443814 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 6 weeks ago 1.04GB 2025-08-29 15:22:55.443817 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 6 weeks ago 1.04GB 2025-08-29 15:22:55.443821 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB 2025-08-29 15:22:55.443825 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB 2025-08-29 15:22:55.443829 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB 2025-08-29 15:22:55.443846 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB 2025-08-29 15:22:55.443850 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB 2025-08-29 15:22:55.443854 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB 2025-08-29 15:22:55.443858 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 
2b6bd346ad18 6 weeks ago 1.04GB 2025-08-29 15:22:55.443864 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB 2025-08-29 15:22:55.443868 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB 2025-08-29 15:22:55.443872 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB 2025-08-29 15:22:55.443876 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB 2025-08-29 15:22:55.443880 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB 2025-08-29 15:22:55.443883 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB 2025-08-29 15:22:55.443887 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB 2025-08-29 15:22:55.443891 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB 2025-08-29 15:22:55.443895 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB 2025-08-29 15:22:55.443898 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB 2025-08-29 15:22:55.443902 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB 2025-08-29 15:22:55.443906 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB 2025-08-29 15:22:55.443910 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB 2025-08-29 15:22:55.443913 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 
weeks ago 1.05GB 2025-08-29 15:22:55.443917 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB 2025-08-29 15:22:55.443921 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB 2025-08-29 15:22:55.443925 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB 2025-08-29 15:22:55.443929 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 6 weeks ago 1.11GB 2025-08-29 15:22:55.443932 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 6 weeks ago 1.11GB 2025-08-29 15:22:55.443936 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB 2025-08-29 15:22:55.443940 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB 2025-08-29 15:22:55.443944 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB 2025-08-29 15:22:55.443947 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB 2025-08-29 15:22:55.443954 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 6 weeks ago 1.04GB 2025-08-29 15:22:55.443958 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 6 weeks ago 1.04GB 2025-08-29 15:22:55.443962 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 6 weeks ago 1.04GB 2025-08-29 15:22:55.443966 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 6 weeks ago 1.04GB 2025-08-29 15:22:55.443970 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 15:22:55.739957 | 
orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 15:22:55.746085 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:22:55.797887 | orchestrator | 2025-08-29 15:22:55.797968 | orchestrator | ## Containers @ testbed-node-1 2025-08-29 15:22:55.797978 | orchestrator | 2025-08-29 15:22:55.797986 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:22:55.797993 | orchestrator | + echo 2025-08-29 15:22:55.798001 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-08-29 15:22:55.798008 | orchestrator | + echo 2025-08-29 15:22:55.798036 | orchestrator | + osism container testbed-node-1 ps 2025-08-29 15:22:58.170543 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 15:22:58.170635 | orchestrator | 17ce8e8b8c44 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 15:22:58.170645 | orchestrator | 5af387b21a72 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 15:22:58.170653 | orchestrator | 0d948e891424 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 15:22:58.170660 | orchestrator | f0d4ac617ee9 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-08-29 15:22:58.170666 | orchestrator | c3ae3ecbd66f registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-08-29 15:22:58.170673 | orchestrator | 70e93c597e39 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-08-29 15:22:58.170679 | orchestrator | 43d32b3900d9 
registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-08-29 15:22:58.170686 | orchestrator | c4ebeb28c968 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-08-29 15:22:58.170693 | orchestrator | cb26384aee05 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-08-29 15:22:58.170740 | orchestrator | e4132e53f821 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-08-29 15:22:58.170750 | orchestrator | eec08df54c12 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-08-29 15:22:58.170786 | orchestrator | 2f6953a7a072 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-08-29 15:22:58.170793 | orchestrator | 9792c4ef8788 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-08-29 15:22:58.170800 | orchestrator | 1457af38953f registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-08-29 15:22:58.170806 | orchestrator | e231dae5591e registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-08-29 15:22:58.170813 | orchestrator | c789547c21b0 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-08-29 15:22:58.170819 | orchestrator | 07e50e45f1e5 
registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-08-29 15:22:58.170826 | orchestrator | e8ed71b109fa registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-08-29 15:22:58.170849 | orchestrator | f0a1e778b62e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-08-29 15:22:58.170868 | orchestrator | d9deb2c106eb registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-08-29 15:22:58.170875 | orchestrator | 66be61b46dea registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-08-29 15:22:58.170881 | orchestrator | 6ffdeb708d5e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-08-29 15:22:58.170888 | orchestrator | 37e84c1272d6 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-08-29 15:22:58.170895 | orchestrator | 1e688e321db8 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-08-29 15:22:58.170901 | orchestrator | 379837852b33 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-08-29 15:22:58.170907 | orchestrator | d40def23fd03 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-08-29 
15:22:58.170916 | orchestrator | e10dc3d36291 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-08-29 15:22:58.170922 | orchestrator | 8e5f188b7fac registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-08-29 15:22:58.170929 | orchestrator | 2a933ac26def registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-08-29 15:22:58.170942 | orchestrator | 40e84e19a6bd registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-08-29 15:22:58.170948 | orchestrator | 9cd44a8a4250 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-08-29 15:22:58.170955 | orchestrator | e7ee7147f250 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-08-29 15:22:58.170961 | orchestrator | 10c6e89d1ed7 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-08-29 15:22:58.170968 | orchestrator | 0af6f0b18ba8 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-08-29 15:22:58.170974 | orchestrator | 411ccfb71344 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-08-29 15:22:58.170980 | orchestrator | bf4398e3c3d4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-08-29 15:22:58.170987 | 
orchestrator | 826074cba226 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-08-29 15:22:58.170993 | orchestrator | 81ebab4ed7c7 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-08-29 15:22:58.171000 | orchestrator | 274d440a96ff registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-08-29 15:22:58.171006 | orchestrator | 0d9b454b66b4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-08-29 15:22:58.171018 | orchestrator | b27ec9fe2eb0 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 15:22:58.171029 | orchestrator | 72614bf729af registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 23 minutes (healthy) proxysql 2025-08-29 15:22:58.171035 | orchestrator | 28519e73c751 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-08-29 15:22:58.171042 | orchestrator | 72420d14cc50 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 15:22:58.171048 | orchestrator | 0c7cc945005c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 15:22:58.171055 | orchestrator | 8a8800a8eddc registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-08-29 15:22:58.171061 | orchestrator | 39d964c9877b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 
minutes ago Up 27 minutes ovn_controller
2025-08-29 15:22:58.171068 | orchestrator | b537b185dc58 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-08-29 15:22:58.171079 | orchestrator | 3eefe679e0ec registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-08-29 15:22:58.171086 | orchestrator | e661574b5236 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-08-29 15:22:58.171092 | orchestrator | 63b48b802773 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-08-29 15:22:58.171099 | orchestrator | bece89269d11 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-08-29 15:22:58.171105 | orchestrator | b9cd8b0b41f5 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-08-29 15:22:58.171112 | orchestrator | 4dbb7b411540 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-08-29 15:22:58.171118 | orchestrator | af05f1cb306a registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 15:22:58.171124 | orchestrator | 8f9ac80df1da registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 15:22:58.171131 | orchestrator | ba758a3b647e registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 15:22:58.469031 | orchestrator |
2025-08-29 15:22:58.469120 | orchestrator | ## Images @ testbed-node-1
2025-08-29 15:22:58.469136 | orchestrator |
2025-08-29 15:22:58.469149 | orchestrator | + echo
2025-08-29 15:22:58.469161 | orchestrator | + echo '## Images @ testbed-node-1'
2025-08-29 15:22:58.469173 | orchestrator | + echo
2025-08-29 15:22:58.469185 | orchestrator | + osism container testbed-node-1 images
2025-08-29 15:23:00.636604 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-08-29 15:23:00.636775 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB
2025-08-29 15:23:00.638187 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB
2025-08-29 15:23:00.638282 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB
2025-08-29 15:23:00.638296 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB
2025-08-29 15:23:00.638308 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB
2025-08-29 15:23:00.638319 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB
2025-08-29 15:23:00.638331 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB
2025-08-29 15:23:00.638343 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB
2025-08-29 15:23:00.638354 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB
2025-08-29 15:23:00.638365 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB
2025-08-29 15:23:00.638407 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB
2025-08-29 15:23:00.638420 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB
2025-08-29 15:23:00.638431 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB
2025-08-29 15:23:00.638442 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB
2025-08-29 15:23:00.638471 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB
2025-08-29 15:23:00.638483 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB
2025-08-29 15:23:00.638494 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB
2025-08-29 15:23:00.638505 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB
2025-08-29 15:23:00.638516 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB
2025-08-29 15:23:00.638528 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB
2025-08-29 15:23:00.638539 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB
2025-08-29 15:23:00.638550 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB
2025-08-29 15:23:00.638561 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB
2025-08-29 15:23:00.638572 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB
2025-08-29 15:23:00.638589 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB
2025-08-29 15:23:00.638607 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB
2025-08-29 15:23:00.638619 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB
2025-08-29 15:23:00.638630 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB
2025-08-29 15:23:00.638641 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB
2025-08-29 15:23:00.638652 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB
2025-08-29 15:23:00.638663 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB
2025-08-29 15:23:00.638727 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB
2025-08-29 15:23:00.638740 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB
2025-08-29 15:23:00.638752 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB
2025-08-29 15:23:00.638763 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB
2025-08-29 15:23:00.638774 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB
2025-08-29 15:23:00.638793 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB
2025-08-29 15:23:00.638805 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB
2025-08-29 15:23:00.638816 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB
2025-08-29 15:23:00.638827 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB
2025-08-29 15:23:00.638838 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB
2025-08-29 15:23:00.638849 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB
2025-08-29 15:23:00.638860 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB
2025-08-29 15:23:00.638872 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB
2025-08-29 15:23:00.638883 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB
2025-08-29 15:23:00.638893 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB
2025-08-29 15:23:00.638904 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB
2025-08-29 15:23:00.638915 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB
2025-08-29 15:23:00.638926 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB
2025-08-29 15:23:00.638937 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB
2025-08-29 15:23:00.638948 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB
2025-08-29 15:23:00.638959 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB
2025-08-29 15:23:00.638970 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB
2025-08-29 15:23:00.638981 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB
2025-08-29 15:23:00.638993 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB
2025-08-29 15:23:00.917352 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-08-29 15:23:00.918391 | orchestrator | ++ semver 9.2.0 5.0.0
2025-08-29 15:23:00.951242 | orchestrator |
2025-08-29 15:23:00.951305 | orchestrator | ## Containers @ testbed-node-2
2025-08-29 15:23:00.951318 | orchestrator |
2025-08-29 15:23:00.951328 | orchestrator | + [[ 1 -eq -1 ]]
2025-08-29 15:23:00.951336 | orchestrator | + echo
2025-08-29 15:23:00.951345 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-08-29 15:23:00.951354 | orchestrator | + echo
2025-08-29 15:23:00.951363 | orchestrator | + osism container testbed-node-2 ps
2025-08-29 15:23:03.231182 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-08-29 15:23:03.231302 | orchestrator | 7c852dfda7c2 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-08-29 15:23:03.231319 | orchestrator | c82648069d9d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-08-29 15:23:03.231351 | orchestrator | a04fc0e69ac2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-08-29 15:23:03.231363 | orchestrator | 8aa4f47b3137 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-08-29 15:23:03.231374 | orchestrator | a5d3567cacd1 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-08-29 15:23:03.231386 | orchestrator | e7dde1eeceaa registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-08-29 15:23:03.231397 | orchestrator | 726d3e133a30 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-08-29 15:23:03.231408 | orchestrator | abf8f371bd59 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-08-29 15:23:03.231419 | orchestrator | a20c906afe81 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-08-29 15:23:03.231430 | orchestrator | 8a4fdf3aa5b5 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) placement_api
2025-08-29 15:23:03.231441 | orchestrator | a2a23d7561d4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_mdns
2025-08-29 15:23:03.231453 | orchestrator | 2ec29ec5d9ae registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-08-29 15:23:03.231464 | orchestrator | b38892de225e registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-08-29 15:23:03.231475 | orchestrator | 8579bd4ea389 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-08-29 15:23:03.231486 | orchestrator | c02442f42b5d registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-08-29 15:23:03.231497 | orchestrator | 05e9d13c20df registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-08-29 15:23:03.231508 | orchestrator | c0d6f58542bc registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-08-29 15:23:03.231519 | orchestrator | 6dc2407fa31e registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_conductor
2025-08-29 15:23:03.231530 | orchestrator | a1df79a3dfdb registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-08-29 15:23:03.231556 | orchestrator | 59bcf05ddf7e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-08-29 15:23:03.231576 | orchestrator | 0f6e9aaaa1ca registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-08-29 15:23:03.231587 | orchestrator | cd6bfd05d345 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-08-29 15:23:03.231598 | orchestrator | 56cec7edd093 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-08-29 15:23:03.231609 | orchestrator | 037d5e1fb4b3 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-08-29 15:23:03.231621 | orchestrator | 6944d32ec76a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-08-29 15:23:03.231633 | orchestrator | 191a395329f6 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-08-29 15:23:03.231645 | orchestrator | 024c8f1cb935 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-08-29 15:23:03.231656 | orchestrator | e0a3a7a0b9d4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-08-29 15:23:03.231667 | orchestrator | dde378a6a7c3 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-08-29 15:23:03.231679 | orchestrator | dcb2dee31f46 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-08-29 15:23:03.231690 | orchestrator | bd2cb9683046 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-08-29 15:23:03.231701 | orchestrator | 4823ef37d46d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2025-08-29 15:23:03.231785 | orchestrator | afb42ab09ddd registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-08-29 15:23:03.231808 | orchestrator | b3f3a37ee151 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-08-29 15:23:03.231822 | orchestrator | b227722e53c0 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-08-29 15:23:03.231835 | orchestrator | 859ccaf9f8b5 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-08-29 15:23:03.231848 | orchestrator | 833e20149c5f registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-08-29 15:23:03.231860 | orchestrator | dda7fce5ca06 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-08-29 15:23:03.231880 | orchestrator | 9c275bcd266d registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-08-29 15:23:03.231892 | orchestrator | a6ae9426b29b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2025-08-29 15:23:03.231919 | orchestrator | 9e7e8b8b8ae2 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-08-29 15:23:03.231934 | orchestrator | be58584fc0f0 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-08-29 15:23:03.231947 | orchestrator | d1a630063c94 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-08-29 15:23:03.231960 | orchestrator | e048b3ff2fcf registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd
2025-08-29 15:23:03.231973 | orchestrator | 0e984d3b0799 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-08-29 15:23:03.231986 | orchestrator | d99a472be20b registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-08-29 15:23:03.231999 | orchestrator | 97ccbcf14de1 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-08-29 15:23:03.232012 | orchestrator | afa13ef2d3f9 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-08-29 15:23:03.232025 | orchestrator | 9874f98c4097 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-08-29 15:23:03.232038 | orchestrator | 63c0a55e528d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-08-29 15:23:03.232051 | orchestrator | 4f3a94a447a8 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-08-29 15:23:03.232064 | orchestrator | ad0545146758 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-08-29 15:23:03.232077 | orchestrator | 00c80cb9da01 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-08-29 15:23:03.232090 | orchestrator | 2ad138421637 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-08-29 15:23:03.232102 | orchestrator | f9f7a91f3bf1 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 15:23:03.232113 | orchestrator | cc24d94059b8 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 15:23:03.232124 | orchestrator | 50e6f459044c registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 15:23:03.496111 | orchestrator |
2025-08-29 15:23:03.496211 | orchestrator | ## Images @ testbed-node-2
2025-08-29 15:23:03.496226 | orchestrator |
2025-08-29 15:23:03.496237 | orchestrator | + echo
2025-08-29 15:23:03.496248 | orchestrator | + echo '## Images @ testbed-node-2'
2025-08-29 15:23:03.496259 | orchestrator | + echo
2025-08-29 15:23:03.496269 | orchestrator | + osism container testbed-node-2 images
2025-08-29 15:23:05.711262 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-08-29 15:23:05.711388 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB
2025-08-29 15:23:05.711404 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB
2025-08-29 15:23:05.711416 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB
2025-08-29 15:23:05.711428 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB
2025-08-29 15:23:05.711440 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB
2025-08-29 15:23:05.711452 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB
2025-08-29 15:23:05.711463 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB
2025-08-29 15:23:05.711475 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB
2025-08-29 15:23:05.711538 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB
2025-08-29 15:23:05.711553 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB
2025-08-29 15:23:05.711565 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB
2025-08-29 15:23:05.711577 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB
2025-08-29 15:23:05.711589 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB
2025-08-29 15:23:05.711602 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB
2025-08-29 15:23:05.711614 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB
2025-08-29 15:23:05.711626 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB
2025-08-29 15:23:05.711638 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB
2025-08-29 15:23:05.711650 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB
2025-08-29 15:23:05.711661 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB
2025-08-29 15:23:05.711693 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB
2025-08-29 15:23:05.711705 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB
2025-08-29 15:23:05.711759 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB
2025-08-29 15:23:05.711770 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB
2025-08-29 15:23:05.711805 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB
2025-08-29 15:23:05.711820 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB
2025-08-29 15:23:05.711832 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB
2025-08-29 15:23:05.711845 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB
2025-08-29 15:23:05.711858 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB
2025-08-29 15:23:05.711871 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB
2025-08-29 15:23:05.711884 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB
2025-08-29 15:23:05.711896 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB
2025-08-29 15:23:05.711928 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB
2025-08-29 15:23:05.711942 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB
2025-08-29 15:23:05.711955 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB
2025-08-29 15:23:05.711968 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB
2025-08-29 15:23:05.711981 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB
2025-08-29 15:23:05.711993 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB
2025-08-29 15:23:05.712011 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB
2025-08-29 15:23:05.712024 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB
2025-08-29 15:23:05.712037 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB
2025-08-29 15:23:05.712050 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB
2025-08-29 15:23:05.712063 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB
2025-08-29 15:23:05.712075 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB
2025-08-29 15:23:05.712088 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB
2025-08-29 15:23:05.712101 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB
2025-08-29 15:23:05.712113 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB
2025-08-29 15:23:05.712127 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB
2025-08-29 15:23:05.712140 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB
2025-08-29 15:23:05.712153 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB
2025-08-29 15:23:05.712171 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB
2025-08-29 15:23:05.712182 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB
2025-08-29 15:23:05.712193 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB
2025-08-29 15:23:05.712205 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB
2025-08-29 15:23:05.712216 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB
2025-08-29 15:23:05.712227 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB
2025-08-29 15:23:06.130251 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-08-29 15:23:06.137341 | orchestrator | + set -e
2025-08-29 15:23:06.137388 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 15:23:06.138430 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 15:23:06.138451 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 15:23:06.138462 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 15:23:06.138471 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 15:23:06.138482 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 15:23:06.138492 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 15:23:06.138502 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 15:23:06.138512 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 15:23:06.138522 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 15:23:06.138532 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 15:23:06.138542 | orchestrator | ++ export ARA=false
2025-08-29 15:23:06.138552 | orchestrator | ++ ARA=false
2025-08-29 15:23:06.138562 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 15:23:06.138572 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 15:23:06.138581 | orchestrator | ++ export TEMPEST=false
2025-08-29 15:23:06.138591 | orchestrator | ++ TEMPEST=false
2025-08-29 15:23:06.138601 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 15:23:06.138611 | orchestrator | ++ IS_ZUUL=true
2025-08-29 15:23:06.138620 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 15:23:06.138635 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 15:23:06.138646 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 15:23:06.138656 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 15:23:06.138666 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 15:23:06.138676 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 15:23:06.138686 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 15:23:06.138696 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 15:23:06.138706 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 15:23:06.138744 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 15:23:06.138754 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 15:23:06.138765 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-08-29 15:23:06.149860 | orchestrator | + set -e
2025-08-29 15:23:06.150666 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 15:23:06.150745 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 15:23:06.150761 | orchestrator | ++ INTERACTIVE=false
2025-08-29 15:23:06.150772 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 15:23:06.150781 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 15:23:06.150792 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-08-29 15:23:06.151355 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-08-29 15:23:06.158154 | orchestrator |
2025-08-29 15:23:06.158211 | orchestrator | # Ceph status
2025-08-29 15:23:06.158222 | orchestrator |
2025-08-29 15:23:06.158233 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 15:23:06.158244 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 15:23:06.158254 | orchestrator | + echo
2025-08-29 15:23:06.158264 | orchestrator | + echo '# Ceph status'
2025-08-29 15:23:06.158274 | orchestrator | + echo
2025-08-29 15:23:06.158284 | orchestrator | + ceph -s
2025-08-29 15:23:06.830258 | orchestrator | cluster:
2025-08-29 15:23:06.830359 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-08-29 15:23:06.830375 | orchestrator | health: HEALTH_OK
2025-08-29 15:23:06.830387 | orchestrator |
2025-08-29 15:23:06.830399 | orchestrator | services:
2025-08-29 15:23:06.830437 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2025-08-29 15:23:06.830462 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2
2025-08-29 15:23:06.830474 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-08-29 15:23:06.830485 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m)
2025-08-29 15:23:06.830497 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-08-29 15:23:06.830508 | orchestrator |
2025-08-29 15:23:06.830519 | orchestrator | data:
2025-08-29 15:23:06.830530 | orchestrator | volumes: 1/1 healthy
2025-08-29 15:23:06.830542 | orchestrator | pools: 14 pools, 401 pgs
2025-08-29 15:23:06.830553 | orchestrator | objects: 524 objects, 2.2 GiB
2025-08-29 15:23:06.830564 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-08-29 15:23:06.830576 | orchestrator | pgs: 401 active+clean
2025-08-29 15:23:06.830587 | orchestrator |
2025-08-29 15:23:06.892032 | orchestrator |
2025-08-29 15:23:06.892111 | orchestrator | # Ceph versions
2025-08-29 15:23:06.892125 | orchestrator |
2025-08-29 15:23:06.892137 | orchestrator | + echo
2025-08-29 15:23:06.892149 | orchestrator | + echo '# Ceph versions'
2025-08-29 15:23:06.892161 | orchestrator | + echo
2025-08-29 15:23:06.892172 | orchestrator | + ceph versions
2025-08-29 15:23:07.507177 | orchestrator | {
2025-08-29 15:23:07.507313 | orchestrator | "mon": {
2025-08-29 15:23:07.507331 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-08-29 15:23:07.507344 | orchestrator | },
2025-08-29 15:23:07.507355 | orchestrator | "mgr": {
2025-08-29 15:23:07.507367 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-08-29 15:23:07.507378 | orchestrator | },
2025-08-29 15:23:07.507389 | orchestrator | "osd": {
2025-08-29 15:23:07.507401 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-08-29 15:23:07.507413 | orchestrator | },
2025-08-29 15:23:07.507424 | orchestrator | "mds": {
2025-08-29 15:23:07.507435 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-08-29 15:23:07.507446 | orchestrator | },
2025-08-29 15:23:07.507457 | orchestrator | "rgw": {
2025-08-29 15:23:07.507468 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-08-29 15:23:07.507479 | orchestrator | },
2025-08-29 15:23:07.507491 | orchestrator | "overall": {
2025-08-29 15:23:07.507502 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-08-29 15:23:07.507514 | orchestrator | }
2025-08-29 15:23:07.507525 | orchestrator | }
2025-08-29 15:23:07.553686 | orchestrator |
2025-08-29 15:23:07.553807 | orchestrator | # Ceph OSD tree
2025-08-29 15:23:07.553822 | orchestrator |
2025-08-29 15:23:07.553834 | orchestrator | + echo
2025-08-29 15:23:07.553846 | orchestrator | + echo '# Ceph OSD tree'
2025-08-29 15:23:07.553858 | orchestrator | + echo
2025-08-29 15:23:07.553869 | orchestrator | + ceph osd df tree
2025-08-29 15:23:08.069054 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-08-29 15:23:08.069141 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-08-29 15:23:08.069147 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-08-29 15:23:08.069152 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.61 1.12 201 up osd.0
2025-08-29 15:23:08.069157 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 995 MiB 1 KiB 74 MiB 19 GiB 5.22 0.88 189 up osd.5
2025-08-29 15:23:08.069161 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-08-29 15:23:08.069177 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.81 0.98 192 up osd.2
2025-08-29 15:23:08.069181 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 6.03 1.02 200 up osd.3
2025-08-29 15:23:08.069185 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-08-29 15:23:08.069209 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.39 1.08 192 up osd.1
2025-08-29 15:23:08.069215 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.44 0.92 196 up osd.4
2025-08-29 15:23:08.069221 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-08-29 15:23:08.069228 | orchestrator | MIN/MAX VAR: 0.88/1.12 STDDEV: 0.49
2025-08-29 15:23:08.125229 | orchestrator |
2025-08-29 15:23:08.125322 | orchestrator | # Ceph monitor status
2025-08-29 15:23:08.125336 | orchestrator |
2025-08-29 15:23:08.125346 | orchestrator | + echo
2025-08-29 15:23:08.125356 | orchestrator | + echo '# Ceph monitor status'
2025-08-29 15:23:08.125367 | orchestrator | + echo
2025-08-29 15:23:08.125377 | orchestrator | + ceph mon stat
2025-08-29 15:23:08.751886 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-08-29 15:23:08.800299 | orchestrator |
2025-08-29 15:23:08.800373 | orchestrator | # Ceph quorum status
2025-08-29 15:23:08.800383 | orchestrator |
2025-08-29 15:23:08.800391 | orchestrator | + echo
2025-08-29 15:23:08.800398 | orchestrator | + echo '# Ceph quorum status'
2025-08-29 15:23:08.800405 | orchestrator | + echo
2025-08-29 15:23:08.801033 | orchestrator | + jq
2025-08-29 15:23:08.801059 | orchestrator | + ceph quorum_status
2025-08-29 15:23:09.458964 | orchestrator | {
2025-08-29 15:23:09.459075 | orchestrator | "election_epoch": 6,
2025-08-29 15:23:09.459092 | orchestrator | "quorum": [
2025-08-29 15:23:09.459105 | orchestrator | 0,
2025-08-29 15:23:09.459117 | orchestrator | 1,
2025-08-29 15:23:09.459128 | orchestrator | 2
2025-08-29 15:23:09.459139 | orchestrator | ],
2025-08-29 15:23:09.459151 | orchestrator | "quorum_names": [
2025-08-29 15:23:09.459162 | orchestrator | "testbed-node-0",
2025-08-29 15:23:09.459173 | orchestrator | "testbed-node-1",
2025-08-29 15:23:09.459184 | orchestrator | "testbed-node-2"
2025-08-29 15:23:09.459195 | orchestrator | ],
2025-08-29 15:23:09.459206 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-08-29 15:23:09.459219 | orchestrator | "quorum_age": 1702,
2025-08-29 15:23:09.459230 | orchestrator | "features": {
2025-08-29 15:23:09.459241 | orchestrator | "quorum_con": "4540138322906710015",
2025-08-29 15:23:09.459252 | orchestrator | "quorum_mon": [
2025-08-29 15:23:09.459263 | orchestrator | "kraken",
2025-08-29 15:23:09.459274 | orchestrator | "luminous",
2025-08-29 15:23:09.459285 | orchestrator | "mimic",
2025-08-29 15:23:09.459296 | orchestrator | "osdmap-prune",
2025-08-29 15:23:09.459307 | orchestrator | "nautilus",
2025-08-29 15:23:09.459318 | orchestrator | "octopus",
2025-08-29 15:23:09.459329 | orchestrator | "pacific",
2025-08-29 15:23:09.459340 | orchestrator | "elector-pinging",
2025-08-29 15:23:09.459365 | orchestrator | "quincy", 2025-08-29 15:23:09.459376 | orchestrator | "reef" 2025-08-29 15:23:09.459388 | orchestrator | ] 2025-08-29 15:23:09.459399 | orchestrator | }, 2025-08-29 15:23:09.459410 | orchestrator | "monmap": { 2025-08-29 15:23:09.459422 | orchestrator | "epoch": 1, 2025-08-29 15:23:09.459433 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-08-29 15:23:09.459445 | orchestrator | "modified": "2025-08-29T14:54:24.331506Z", 2025-08-29 15:23:09.459458 | orchestrator | "created": "2025-08-29T14:54:24.331506Z", 2025-08-29 15:23:09.459477 | orchestrator | "min_mon_release": 18, 2025-08-29 15:23:09.459495 | orchestrator | "min_mon_release_name": "reef", 2025-08-29 15:23:09.459514 | orchestrator | "election_strategy": 1, 2025-08-29 15:23:09.459532 | orchestrator | "disallowed_leaders: ": "", 2025-08-29 15:23:09.459549 | orchestrator | "stretch_mode": false, 2025-08-29 15:23:09.459566 | orchestrator | "tiebreaker_mon": "", 2025-08-29 15:23:09.459583 | orchestrator | "removed_ranks: ": "", 2025-08-29 15:23:09.459601 | orchestrator | "features": { 2025-08-29 15:23:09.459618 | orchestrator | "persistent": [ 2025-08-29 15:23:09.459636 | orchestrator | "kraken", 2025-08-29 15:23:09.459653 | orchestrator | "luminous", 2025-08-29 15:23:09.459672 | orchestrator | "mimic", 2025-08-29 15:23:09.459690 | orchestrator | "osdmap-prune", 2025-08-29 15:23:09.459708 | orchestrator | "nautilus", 2025-08-29 15:23:09.459756 | orchestrator | "octopus", 2025-08-29 15:23:09.459769 | orchestrator | "pacific", 2025-08-29 15:23:09.459798 | orchestrator | "elector-pinging", 2025-08-29 15:23:09.459833 | orchestrator | "quincy", 2025-08-29 15:23:09.459845 | orchestrator | "reef" 2025-08-29 15:23:09.459856 | orchestrator | ], 2025-08-29 15:23:09.459867 | orchestrator | "optional": [] 2025-08-29 15:23:09.459878 | orchestrator | }, 2025-08-29 15:23:09.459890 | orchestrator | "mons": [ 2025-08-29 15:23:09.459901 | orchestrator | { 2025-08-29 
15:23:09.459912 | orchestrator | "rank": 0, 2025-08-29 15:23:09.459923 | orchestrator | "name": "testbed-node-0", 2025-08-29 15:23:09.459934 | orchestrator | "public_addrs": { 2025-08-29 15:23:09.459946 | orchestrator | "addrvec": [ 2025-08-29 15:23:09.459957 | orchestrator | { 2025-08-29 15:23:09.459968 | orchestrator | "type": "v2", 2025-08-29 15:23:09.459979 | orchestrator | "addr": "192.168.16.10:3300", 2025-08-29 15:23:09.459990 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460001 | orchestrator | }, 2025-08-29 15:23:09.460012 | orchestrator | { 2025-08-29 15:23:09.460023 | orchestrator | "type": "v1", 2025-08-29 15:23:09.460034 | orchestrator | "addr": "192.168.16.10:6789", 2025-08-29 15:23:09.460045 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460056 | orchestrator | } 2025-08-29 15:23:09.460067 | orchestrator | ] 2025-08-29 15:23:09.460078 | orchestrator | }, 2025-08-29 15:23:09.460089 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-08-29 15:23:09.460101 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-08-29 15:23:09.460112 | orchestrator | "priority": 0, 2025-08-29 15:23:09.460123 | orchestrator | "weight": 0, 2025-08-29 15:23:09.460134 | orchestrator | "crush_location": "{}" 2025-08-29 15:23:09.460145 | orchestrator | }, 2025-08-29 15:23:09.460156 | orchestrator | { 2025-08-29 15:23:09.460167 | orchestrator | "rank": 1, 2025-08-29 15:23:09.460178 | orchestrator | "name": "testbed-node-1", 2025-08-29 15:23:09.460189 | orchestrator | "public_addrs": { 2025-08-29 15:23:09.460200 | orchestrator | "addrvec": [ 2025-08-29 15:23:09.460211 | orchestrator | { 2025-08-29 15:23:09.460222 | orchestrator | "type": "v2", 2025-08-29 15:23:09.460233 | orchestrator | "addr": "192.168.16.11:3300", 2025-08-29 15:23:09.460244 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460255 | orchestrator | }, 2025-08-29 15:23:09.460266 | orchestrator | { 2025-08-29 15:23:09.460277 | orchestrator | "type": "v1", 2025-08-29 15:23:09.460288 | orchestrator | "addr": 
"192.168.16.11:6789", 2025-08-29 15:23:09.460299 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460310 | orchestrator | } 2025-08-29 15:23:09.460321 | orchestrator | ] 2025-08-29 15:23:09.460332 | orchestrator | }, 2025-08-29 15:23:09.460344 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-08-29 15:23:09.460355 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-08-29 15:23:09.460365 | orchestrator | "priority": 0, 2025-08-29 15:23:09.460377 | orchestrator | "weight": 0, 2025-08-29 15:23:09.460388 | orchestrator | "crush_location": "{}" 2025-08-29 15:23:09.460402 | orchestrator | }, 2025-08-29 15:23:09.460421 | orchestrator | { 2025-08-29 15:23:09.460437 | orchestrator | "rank": 2, 2025-08-29 15:23:09.460455 | orchestrator | "name": "testbed-node-2", 2025-08-29 15:23:09.460472 | orchestrator | "public_addrs": { 2025-08-29 15:23:09.460489 | orchestrator | "addrvec": [ 2025-08-29 15:23:09.460509 | orchestrator | { 2025-08-29 15:23:09.460521 | orchestrator | "type": "v2", 2025-08-29 15:23:09.460532 | orchestrator | "addr": "192.168.16.12:3300", 2025-08-29 15:23:09.460543 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460568 | orchestrator | }, 2025-08-29 15:23:09.460579 | orchestrator | { 2025-08-29 15:23:09.460590 | orchestrator | "type": "v1", 2025-08-29 15:23:09.460601 | orchestrator | "addr": "192.168.16.12:6789", 2025-08-29 15:23:09.460612 | orchestrator | "nonce": 0 2025-08-29 15:23:09.460623 | orchestrator | } 2025-08-29 15:23:09.460634 | orchestrator | ] 2025-08-29 15:23:09.460645 | orchestrator | }, 2025-08-29 15:23:09.460656 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-08-29 15:23:09.460667 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-08-29 15:23:09.460678 | orchestrator | "priority": 0, 2025-08-29 15:23:09.460689 | orchestrator | "weight": 0, 2025-08-29 15:23:09.460700 | orchestrator | "crush_location": "{}" 2025-08-29 15:23:09.460711 | orchestrator | } 2025-08-29 15:23:09.460742 | orchestrator | ] 2025-08-29 
15:23:09.460755 | orchestrator | } 2025-08-29 15:23:09.460766 | orchestrator | } 2025-08-29 15:23:09.460914 | orchestrator | 2025-08-29 15:23:09.460930 | orchestrator | # Ceph free space status 2025-08-29 15:23:09.460957 | orchestrator | 2025-08-29 15:23:09.460968 | orchestrator | + echo 2025-08-29 15:23:09.460979 | orchestrator | + echo '# Ceph free space status' 2025-08-29 15:23:09.460990 | orchestrator | + echo 2025-08-29 15:23:09.461002 | orchestrator | + ceph df 2025-08-29 15:23:10.037463 | orchestrator | --- RAW STORAGE --- 2025-08-29 15:23:10.037564 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-08-29 15:23:10.037588 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 15:23:10.037599 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 15:23:10.037609 | orchestrator | 2025-08-29 15:23:10.037619 | orchestrator | --- POOLS --- 2025-08-29 15:23:10.037630 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-08-29 15:23:10.037642 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-08-29 15:23:10.037653 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:23:10.037663 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-08-29 15:23:10.037672 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:23:10.037682 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:23:10.037691 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-08-29 15:23:10.037701 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-08-29 15:23:10.037710 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-08-29 15:23:10.037720 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-08-29 15:23:10.037765 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:23:10.037774 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:23:10.037784 | orchestrator | images 12 32 
2.2 GiB 299 6.7 GiB 5.91 35 GiB 2025-08-29 15:23:10.037794 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:23:10.037804 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 15:23:10.090358 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 15:23:10.152840 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 15:23:10.152919 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-08-29 15:23:10.152928 | orchestrator | + osism apply facts 2025-08-29 15:23:22.238917 | orchestrator | 2025-08-29 15:23:22 | INFO  | Task c39fdb23-8b3d-4dc9-be98-8a614ce99815 (facts) was prepared for execution. 2025-08-29 15:23:22.239071 | orchestrator | 2025-08-29 15:23:22 | INFO  | It takes a moment until task c39fdb23-8b3d-4dc9-be98-8a614ce99815 (facts) has been started and output is visible here. 2025-08-29 15:23:36.545471 | orchestrator | 2025-08-29 15:23:36.545625 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 15:23:36.545645 | orchestrator | 2025-08-29 15:23:36.545657 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 15:23:36.545669 | orchestrator | Friday 29 August 2025 15:23:26 +0000 (0:00:00.288) 0:00:00.288 ********* 2025-08-29 15:23:36.545681 | orchestrator | ok: [testbed-manager] 2025-08-29 15:23:36.545693 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:23:36.545704 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:23:36.545715 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:23:36.545725 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:23:36.545737 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:23:36.545747 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:23:36.545760 | orchestrator | 2025-08-29 15:23:36.545851 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 15:23:36.545882 | orchestrator | Friday 29 August 2025 15:23:28 +0000 (0:00:01.541) 0:00:01.829 ********* 
2025-08-29 15:23:36.545903 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:23:36.545922 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:23:36.545943 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:23:36.545982 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:23:36.545996 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:23:36.546008 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:23:36.546088 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:23:36.546104 | orchestrator | 2025-08-29 15:23:36.546117 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 15:23:36.546129 | orchestrator | 2025-08-29 15:23:36.546143 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 15:23:36.546155 | orchestrator | Friday 29 August 2025 15:23:29 +0000 (0:00:01.416) 0:00:03.246 ********* 2025-08-29 15:23:36.546168 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:23:36.546179 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:23:36.546190 | orchestrator | ok: [testbed-manager] 2025-08-29 15:23:36.546201 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:23:36.546212 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:23:36.546223 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:23:36.546234 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:23:36.546246 | orchestrator | 2025-08-29 15:23:36.546258 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 15:23:36.546269 | orchestrator | 2025-08-29 15:23:36.546281 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 15:23:36.546292 | orchestrator | Friday 29 August 2025 15:23:35 +0000 (0:00:05.872) 0:00:09.118 ********* 2025-08-29 15:23:36.546303 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:23:36.546314 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:23:36.546325 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:23:36.546336 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:23:36.546347 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:23:36.546359 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:23:36.546370 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:23:36.546381 | orchestrator | 2025-08-29 15:23:36.546392 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:23:36.546404 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546417 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546432 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546454 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546473 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546492 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546516 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:23:36.546542 | orchestrator | 2025-08-29 15:23:36.546562 | orchestrator | 2025-08-29 15:23:36.546582 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:23:36.546625 | orchestrator | Friday 29 August 2025 15:23:36 +0000 (0:00:00.666) 0:00:09.785 ********* 2025-08-29 15:23:36.546638 | orchestrator | =============================================================================== 2025-08-29 15:23:36.546649 | orchestrator | Gathers facts about hosts 
----------------------------------------------- 5.87s 2025-08-29 15:23:36.546660 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.54s 2025-08-29 15:23:36.546671 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.42s 2025-08-29 15:23:36.546682 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2025-08-29 15:23:36.888958 | orchestrator | + osism validate ceph-mons 2025-08-29 15:24:10.684179 | orchestrator | 2025-08-29 15:24:10.684285 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-08-29 15:24:10.684301 | orchestrator | 2025-08-29 15:24:10.684313 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 15:24:10.684325 | orchestrator | Friday 29 August 2025 15:23:54 +0000 (0:00:00.498) 0:00:00.498 ********* 2025-08-29 15:24:10.684338 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:24:10.684349 | orchestrator | 2025-08-29 15:24:10.684360 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 15:24:10.684372 | orchestrator | Friday 29 August 2025 15:23:54 +0000 (0:00:00.696) 0:00:01.194 ********* 2025-08-29 15:24:10.684383 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 15:24:10.684394 | orchestrator | 2025-08-29 15:24:10.684406 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 15:24:10.684417 | orchestrator | Friday 29 August 2025 15:23:55 +0000 (0:00:00.935) 0:00:02.130 ********* 2025-08-29 15:24:10.684428 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.684440 | orchestrator | 2025-08-29 15:24:10.684451 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-08-29 15:24:10.684463 | 
orchestrator | Friday 29 August 2025 15:23:56 +0000 (0:00:00.292) 0:00:02.423 ********* 2025-08-29 15:24:10.684487 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.684499 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:24:10.684511 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:24:10.684532 | orchestrator | 2025-08-29 15:24:10.684552 | orchestrator | TASK [Get container info] ****************************************************** 2025-08-29 15:24:10.684572 | orchestrator | Friday 29 August 2025 15:23:56 +0000 (0:00:00.333) 0:00:02.757 ********* 2025-08-29 15:24:10.684590 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:24:10.684611 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.684633 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:24:10.684653 | orchestrator | 2025-08-29 15:24:10.684666 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-08-29 15:24:10.684677 | orchestrator | Friday 29 August 2025 15:23:57 +0000 (0:00:01.017) 0:00:03.774 ********* 2025-08-29 15:24:10.684689 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.684700 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:24:10.684711 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:24:10.684725 | orchestrator | 2025-08-29 15:24:10.684738 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-08-29 15:24:10.684750 | orchestrator | Friday 29 August 2025 15:23:57 +0000 (0:00:00.357) 0:00:04.131 ********* 2025-08-29 15:24:10.684763 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.684775 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:24:10.684787 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:24:10.684799 | orchestrator | 2025-08-29 15:24:10.684811 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:24:10.684824 | orchestrator | Friday 29 August 2025 15:23:58 +0000 
(0:00:00.598) 0:00:04.729 ********* 2025-08-29 15:24:10.684836 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.684849 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:24:10.684890 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:24:10.684904 | orchestrator | 2025-08-29 15:24:10.684916 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-08-29 15:24:10.684928 | orchestrator | Friday 29 August 2025 15:23:58 +0000 (0:00:00.340) 0:00:05.070 ********* 2025-08-29 15:24:10.684940 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.684953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:24:10.684965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:24:10.684977 | orchestrator | 2025-08-29 15:24:10.684990 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-08-29 15:24:10.685003 | orchestrator | Friday 29 August 2025 15:23:59 +0000 (0:00:00.324) 0:00:05.394 ********* 2025-08-29 15:24:10.685038 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685052 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:24:10.685064 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:24:10.685077 | orchestrator | 2025-08-29 15:24:10.685088 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:24:10.685099 | orchestrator | Friday 29 August 2025 15:23:59 +0000 (0:00:00.347) 0:00:05.742 ********* 2025-08-29 15:24:10.685110 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685121 | orchestrator | 2025-08-29 15:24:10.685132 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:24:10.685143 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.719) 0:00:06.461 ********* 2025-08-29 15:24:10.685154 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685165 | orchestrator | 2025-08-29 15:24:10.685177 | 
orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:24:10.685188 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.296) 0:00:06.758 ********* 2025-08-29 15:24:10.685199 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685210 | orchestrator | 2025-08-29 15:24:10.685221 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:24:10.685232 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.266) 0:00:07.024 ********* 2025-08-29 15:24:10.685243 | orchestrator | 2025-08-29 15:24:10.685254 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:24:10.685265 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.070) 0:00:07.094 ********* 2025-08-29 15:24:10.685276 | orchestrator | 2025-08-29 15:24:10.685287 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:24:10.685298 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.069) 0:00:07.164 ********* 2025-08-29 15:24:10.685309 | orchestrator | 2025-08-29 15:24:10.685320 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:24:10.685331 | orchestrator | Friday 29 August 2025 15:24:00 +0000 (0:00:00.074) 0:00:07.238 ********* 2025-08-29 15:24:10.685342 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685353 | orchestrator | 2025-08-29 15:24:10.685364 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-08-29 15:24:10.685375 | orchestrator | Friday 29 August 2025 15:24:01 +0000 (0:00:00.280) 0:00:07.518 ********* 2025-08-29 15:24:10.685386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685397 | orchestrator | 2025-08-29 15:24:10.685425 | orchestrator | TASK [Prepare quorum test vars] 
************************************************ 2025-08-29 15:24:10.685437 | orchestrator | Friday 29 August 2025 15:24:01 +0000 (0:00:00.303) 0:00:07.821 ********* 2025-08-29 15:24:10.685449 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685460 | orchestrator | 2025-08-29 15:24:10.685471 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-08-29 15:24:10.685482 | orchestrator | Friday 29 August 2025 15:24:01 +0000 (0:00:00.126) 0:00:07.948 ********* 2025-08-29 15:24:10.685493 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:24:10.685505 | orchestrator | 2025-08-29 15:24:10.685521 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-08-29 15:24:10.685539 | orchestrator | Friday 29 August 2025 15:24:03 +0000 (0:00:01.612) 0:00:09.560 ********* 2025-08-29 15:24:10.685557 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685575 | orchestrator | 2025-08-29 15:24:10.685593 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-08-29 15:24:10.685613 | orchestrator | Friday 29 August 2025 15:24:03 +0000 (0:00:00.331) 0:00:09.892 ********* 2025-08-29 15:24:10.685633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685651 | orchestrator | 2025-08-29 15:24:10.685673 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-08-29 15:24:10.685700 | orchestrator | Friday 29 August 2025 15:24:03 +0000 (0:00:00.351) 0:00:10.244 ********* 2025-08-29 15:24:10.685716 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685739 | orchestrator | 2025-08-29 15:24:10.685750 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-08-29 15:24:10.685762 | orchestrator | Friday 29 August 2025 15:24:04 +0000 (0:00:00.342) 0:00:10.587 ********* 2025-08-29 15:24:10.685773 | orchestrator | ok: [testbed-node-0] 
2025-08-29 15:24:10.685784 | orchestrator | 2025-08-29 15:24:10.685795 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-08-29 15:24:10.685806 | orchestrator | Friday 29 August 2025 15:24:04 +0000 (0:00:00.319) 0:00:10.907 ********* 2025-08-29 15:24:10.685817 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:24:10.685828 | orchestrator | 2025-08-29 15:24:10.685839 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-08-29 15:24:10.685850 | orchestrator | Friday 29 August 2025 15:24:04 +0000 (0:00:00.115) 0:00:11.022 ********* 2025-08-29 15:24:10.685888 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685903 | orchestrator | 2025-08-29 15:24:10.685914 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-08-29 15:24:10.685926 | orchestrator | Friday 29 August 2025 15:24:04 +0000 (0:00:00.121) 0:00:11.143 ********* 2025-08-29 15:24:10.685937 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.685948 | orchestrator | 2025-08-29 15:24:10.685959 | orchestrator | TASK [Gather status data] ****************************************************** 2025-08-29 15:24:10.685971 | orchestrator | Friday 29 August 2025 15:24:04 +0000 (0:00:00.127) 0:00:11.271 ********* 2025-08-29 15:24:10.685982 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:24:10.685993 | orchestrator | 2025-08-29 15:24:10.686005 | orchestrator | TASK [Set health test data] **************************************************** 2025-08-29 15:24:10.686054 | orchestrator | Friday 29 August 2025 15:24:06 +0000 (0:00:01.398) 0:00:12.669 ********* 2025-08-29 15:24:10.686069 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:24:10.686080 | orchestrator | 2025-08-29 15:24:10.686092 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-08-29 15:24:10.686103 | orchestrator | Friday 29 August 
2025 15:24:06 +0000 (0:00:00.340) 0:00:13.010 *********
2025-08-29 15:24:10.686114 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:10.686126 | orchestrator |
2025-08-29 15:24:10.686137 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-08-29 15:24:10.686148 | orchestrator | Friday 29 August 2025 15:24:06 +0000 (0:00:00.195) 0:00:13.205 *********
2025-08-29 15:24:10.686159 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:10.686171 | orchestrator |
2025-08-29 15:24:10.686182 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-08-29 15:24:10.686193 | orchestrator | Friday 29 August 2025 15:24:06 +0000 (0:00:00.137) 0:00:13.343 *********
2025-08-29 15:24:10.686204 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:10.686216 | orchestrator |
2025-08-29 15:24:10.686227 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-08-29 15:24:10.686238 | orchestrator | Friday 29 August 2025 15:24:07 +0000 (0:00:00.170) 0:00:13.513 *********
2025-08-29 15:24:10.686249 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:10.686261 | orchestrator |
2025-08-29 15:24:10.686272 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-08-29 15:24:10.686284 | orchestrator | Friday 29 August 2025 15:24:07 +0000 (0:00:00.384) 0:00:13.898 *********
2025-08-29 15:24:10.686295 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:10.686306 | orchestrator |
2025-08-29 15:24:10.686317 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-08-29 15:24:10.686328 | orchestrator | Friday 29 August 2025 15:24:07 +0000 (0:00:00.267) 0:00:14.165 *********
2025-08-29 15:24:10.686340 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:10.686351 | orchestrator |
2025-08-29 15:24:10.686362 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 15:24:10.686373 | orchestrator | Friday 29 August 2025 15:24:08 +0000 (0:00:00.313) 0:00:14.479 *********
2025-08-29 15:24:10.686392 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:10.686404 | orchestrator |
2025-08-29 15:24:10.686415 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 15:24:10.686426 | orchestrator | Friday 29 August 2025 15:24:09 +0000 (0:00:01.703) 0:00:16.183 *********
2025-08-29 15:24:10.686437 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:10.686451 | orchestrator |
2025-08-29 15:24:10.686470 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 15:24:10.686488 | orchestrator | Friday 29 August 2025 15:24:10 +0000 (0:00:00.278) 0:00:16.462 *********
2025-08-29 15:24:10.686504 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:10.686522 | orchestrator |
2025-08-29 15:24:10.686553 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:13.396668 | orchestrator | Friday 29 August 2025 15:24:10 +0000 (0:00:00.278) 0:00:16.740 *********
2025-08-29 15:24:13.396712 | orchestrator |
2025-08-29 15:24:13.396718 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:13.396723 | orchestrator | Friday 29 August 2025 15:24:10 +0000 (0:00:00.089) 0:00:16.829 *********
2025-08-29 15:24:13.396727 | orchestrator |
2025-08-29 15:24:13.396731 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:13.396736 | orchestrator | Friday 29 August 2025 15:24:10 +0000 (0:00:00.093) 0:00:16.922 *********
2025-08-29 15:24:13.396741 | orchestrator |
2025-08-29 15:24:13.396746 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 15:24:13.396750 | orchestrator | Friday 29 August 2025 15:24:10 +0000 (0:00:00.093) 0:00:17.016 *********
2025-08-29 15:24:13.396755 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:13.396759 | orchestrator |
2025-08-29 15:24:13.396763 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 15:24:13.396767 | orchestrator | Friday 29 August 2025 15:24:12 +0000 (0:00:01.639) 0:00:18.656 *********
2025-08-29 15:24:13.396771 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-08-29 15:24:13.396775 | orchestrator |  "msg": [
2025-08-29 15:24:13.396780 | orchestrator |  "Validator run completed.",
2025-08-29 15:24:13.396785 | orchestrator |  "You can find the report file here:",
2025-08-29 15:24:13.396789 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-08-29T15:23:54+00:00-report.json",
2025-08-29 15:24:13.396794 | orchestrator |  "on the following host:",
2025-08-29 15:24:13.396799 | orchestrator |  "testbed-manager"
2025-08-29 15:24:13.396803 | orchestrator |  ]
2025-08-29 15:24:13.396807 | orchestrator | }
2025-08-29 15:24:13.396811 | orchestrator |
2025-08-29 15:24:13.396815 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:24:13.396820 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-08-29 15:24:13.396833 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:24:13.396837 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:24:13.396841 | orchestrator |
2025-08-29 15:24:13.396845 | orchestrator |
2025-08-29 15:24:13.396849 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:24:13.396853 | orchestrator | Friday 29 August 2025 15:24:13 +0000 (0:00:00.711) 0:00:19.367 *********
2025-08-29 15:24:13.396857 | orchestrator | ===============================================================================
2025-08-29 15:24:13.396861 | orchestrator | Aggregate test results step one ----------------------------------------- 1.70s
2025-08-29 15:24:13.396883 | orchestrator | Write report file ------------------------------------------------------- 1.64s
2025-08-29 15:24:13.396900 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.61s
2025-08-29 15:24:13.396904 | orchestrator | Gather status data ------------------------------------------------------ 1.40s
2025-08-29 15:24:13.396908 | orchestrator | Get container info ------------------------------------------------------ 1.02s
2025-08-29 15:24:13.396912 | orchestrator | Create report output directory ------------------------------------------ 0.94s
2025-08-29 15:24:13.396915 | orchestrator | Aggregate test results step one ----------------------------------------- 0.72s
2025-08-29 15:24:13.396919 | orchestrator | Print report file information ------------------------------------------- 0.71s
2025-08-29 15:24:13.396923 | orchestrator | Get timestamp for report file ------------------------------------------- 0.70s
2025-08-29 15:24:13.396927 | orchestrator | Set test result to passed if container is existing ---------------------- 0.60s
2025-08-29 15:24:13.396931 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.38s
2025-08-29 15:24:13.396935 | orchestrator | Set test result to failed if container is missing ----------------------- 0.36s
2025-08-29 15:24:13.396938 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.35s
2025-08-29 15:24:13.396942 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.35s
2025-08-29 15:24:13.396946 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2025-08-29 15:24:13.396950 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2025-08-29 15:24:13.396954 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2025-08-29 15:24:13.396958 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2025-08-29 15:24:13.396962 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s
2025-08-29 15:24:13.396966 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s
2025-08-29 15:24:13.756744 | orchestrator | + osism validate ceph-mgrs
2025-08-29 15:24:36.498979 | orchestrator |
2025-08-29 15:24:36.499087 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-08-29 15:24:36.499102 | orchestrator |
2025-08-29 15:24:36.499113 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-08-29 15:24:36.499124 | orchestrator | Friday 29 August 2025 15:24:20 +0000 (0:00:00.514) 0:00:00.514 *********
2025-08-29 15:24:36.499136 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.499146 | orchestrator |
2025-08-29 15:24:36.499156 | orchestrator | TASK [Create report output directory] ******************************************
2025-08-29 15:24:36.499166 | orchestrator | Friday 29 August 2025 15:24:21 +0000 (0:00:00.706) 0:00:01.221 *********
2025-08-29 15:24:36.499177 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.499187 | orchestrator |
2025-08-29 15:24:36.499197 | orchestrator | TASK [Define report vars] ******************************************************
2025-08-29 15:24:36.499207 | orchestrator | Friday 29 August 2025 15:24:22 +0000 (0:00:00.910) 0:00:02.131 *********
2025-08-29 15:24:36.499217 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499228 | orchestrator |
2025-08-29 15:24:36.499238 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-08-29 15:24:36.499248 | orchestrator | Friday 29 August 2025 15:24:22 +0000 (0:00:00.274) 0:00:02.405 *********
2025-08-29 15:24:36.499258 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499268 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:24:36.499278 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:24:36.499288 | orchestrator |
2025-08-29 15:24:36.499299 | orchestrator | TASK [Get container info] ******************************************************
2025-08-29 15:24:36.499309 | orchestrator | Friday 29 August 2025 15:24:22 +0000 (0:00:00.354) 0:00:02.760 *********
2025-08-29 15:24:36.499319 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:24:36.499329 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:24:36.499339 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499349 | orchestrator |
2025-08-29 15:24:36.499376 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-08-29 15:24:36.499404 | orchestrator | Friday 29 August 2025 15:24:23 +0000 (0:00:01.069) 0:00:03.829 *********
2025-08-29 15:24:36.499415 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.499426 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:24:36.499435 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:24:36.499446 | orchestrator |
2025-08-29 15:24:36.499458 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-08-29 15:24:36.499470 | orchestrator | Friday 29 August 2025 15:24:24 +0000 (0:00:00.377) 0:00:04.207 *********
2025-08-29 15:24:36.499481 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499493 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:24:36.499504 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:24:36.499515 | orchestrator |
2025-08-29 15:24:36.499527 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 15:24:36.499538 | orchestrator | Friday 29 August 2025 15:24:24 +0000 (0:00:00.584) 0:00:04.792 *********
2025-08-29 15:24:36.499549 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499560 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:24:36.499571 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:24:36.499583 | orchestrator |
2025-08-29 15:24:36.499594 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-08-29 15:24:36.499605 | orchestrator | Friday 29 August 2025 15:24:25 +0000 (0:00:00.333) 0:00:05.126 *********
2025-08-29 15:24:36.499616 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.499627 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:24:36.499639 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:24:36.499650 | orchestrator |
2025-08-29 15:24:36.499661 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-08-29 15:24:36.499672 | orchestrator | Friday 29 August 2025 15:24:25 +0000 (0:00:00.352) 0:00:05.478 *********
2025-08-29 15:24:36.499684 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.499695 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:24:36.499706 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:24:36.499717 | orchestrator |
2025-08-29 15:24:36.499728 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 15:24:36.499739 | orchestrator | Friday 29 August 2025 15:24:25 +0000 (0:00:00.337) 0:00:05.816 *********
2025-08-29 15:24:36.499751 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.499761 | orchestrator |
2025-08-29 15:24:36.499772 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 15:24:36.499784 | orchestrator | Friday 29 August 2025 15:24:26 +0000 (0:00:00.749) 0:00:06.565 *********
2025-08-29 15:24:36.499795 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.499806 | orchestrator |
2025-08-29 15:24:36.499817 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 15:24:36.499828 | orchestrator | Friday 29 August 2025 15:24:26 +0000 (0:00:00.256) 0:00:06.822 *********
2025-08-29 15:24:36.499838 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.499848 | orchestrator |
2025-08-29 15:24:36.499858 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.499868 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.277) 0:00:07.099 *********
2025-08-29 15:24:36.499878 | orchestrator |
2025-08-29 15:24:36.499888 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.499898 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.086) 0:00:07.186 *********
2025-08-29 15:24:36.499908 | orchestrator |
2025-08-29 15:24:36.499941 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.499953 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.068) 0:00:07.255 *********
2025-08-29 15:24:36.499963 | orchestrator |
2025-08-29 15:24:36.499973 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 15:24:36.499983 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.073) 0:00:07.329 *********
2025-08-29 15:24:36.500009 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.500019 | orchestrator |
2025-08-29 15:24:36.500029 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-08-29 15:24:36.500039 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.256) 0:00:07.585 *********
2025-08-29 15:24:36.500050 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.500060 | orchestrator |
2025-08-29 15:24:36.500088 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-08-29 15:24:36.500099 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.251) 0:00:07.837 *********
2025-08-29 15:24:36.500109 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.500119 | orchestrator |
2025-08-29 15:24:36.500129 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-08-29 15:24:36.500139 | orchestrator | Friday 29 August 2025 15:24:27 +0000 (0:00:00.113) 0:00:07.950 *********
2025-08-29 15:24:36.500149 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:24:36.500158 | orchestrator |
2025-08-29 15:24:36.500169 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-08-29 15:24:36.500178 | orchestrator | Friday 29 August 2025 15:24:29 +0000 (0:00:02.001) 0:00:09.952 *********
2025-08-29 15:24:36.500188 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.500198 | orchestrator |
2025-08-29 15:24:36.500209 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-08-29 15:24:36.500219 | orchestrator | Friday 29 August 2025 15:24:30 +0000 (0:00:00.321) 0:00:10.273 *********
2025-08-29 15:24:36.500229 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.500239 | orchestrator |
2025-08-29 15:24:36.500249 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-08-29 15:24:36.500259 | orchestrator | Friday 29 August 2025 15:24:31 +0000 (0:00:00.915) 0:00:11.189 *********
2025-08-29 15:24:36.500269 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.500278 | orchestrator |
2025-08-29 15:24:36.500288 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-08-29 15:24:36.500298 | orchestrator | Friday 29 August 2025 15:24:31 +0000 (0:00:00.142) 0:00:11.331 *********
2025-08-29 15:24:36.500308 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:24:36.500318 | orchestrator |
2025-08-29 15:24:36.500328 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-08-29 15:24:36.500338 | orchestrator | Friday 29 August 2025 15:24:31 +0000 (0:00:00.165) 0:00:11.497 *********
2025-08-29 15:24:36.500348 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.500358 | orchestrator |
2025-08-29 15:24:36.500368 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-08-29 15:24:36.500378 | orchestrator | Friday 29 August 2025 15:24:31 +0000 (0:00:00.296) 0:00:11.794 *********
2025-08-29 15:24:36.500388 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:24:36.500398 | orchestrator |
2025-08-29 15:24:36.500408 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 15:24:36.500418 | orchestrator | Friday 29 August 2025 15:24:32 +0000 (0:00:00.265) 0:00:12.060 *********
2025-08-29 15:24:36.500428 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.500438 | orchestrator |
2025-08-29 15:24:36.500448 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 15:24:36.500458 | orchestrator | Friday 29 August 2025 15:24:33 +0000 (0:00:01.384) 0:00:13.444 *********
2025-08-29 15:24:36.500468 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.500478 | orchestrator |
2025-08-29 15:24:36.500487 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 15:24:36.500497 | orchestrator | Friday 29 August 2025 15:24:33 +0000 (0:00:00.260) 0:00:13.705 *********
2025-08-29 15:24:36.500507 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.500517 | orchestrator |
2025-08-29 15:24:36.500527 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.500543 | orchestrator | Friday 29 August 2025 15:24:33 +0000 (0:00:00.258) 0:00:13.963 *********
2025-08-29 15:24:36.500553 | orchestrator |
2025-08-29 15:24:36.500563 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.500573 | orchestrator | Friday 29 August 2025 15:24:34 +0000 (0:00:00.075) 0:00:14.038 *********
2025-08-29 15:24:36.500583 | orchestrator |
2025-08-29 15:24:36.500593 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:24:36.500603 | orchestrator | Friday 29 August 2025 15:24:34 +0000 (0:00:00.070) 0:00:14.108 *********
2025-08-29 15:24:36.500613 | orchestrator |
2025-08-29 15:24:36.500622 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 15:24:36.500632 | orchestrator | Friday 29 August 2025 15:24:34 +0000 (0:00:00.077) 0:00:14.186 *********
2025-08-29 15:24:36.500642 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:36.500652 | orchestrator |
2025-08-29 15:24:36.500662 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 15:24:36.500672 | orchestrator | Friday 29 August 2025 15:24:36 +0000 (0:00:01.837) 0:00:16.023 *********
2025-08-29 15:24:36.500682 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-08-29 15:24:36.500692 | orchestrator |  "msg": [
2025-08-29 15:24:36.500702 | orchestrator |  "Validator run completed.",
2025-08-29 15:24:36.500712 | orchestrator |  "You can find the report file here:",
2025-08-29 15:24:36.500722 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-08-29T15:24:21+00:00-report.json",
2025-08-29 15:24:36.500733 | orchestrator |  "on the following host:",
2025-08-29 15:24:36.500743 | orchestrator |  "testbed-manager"
2025-08-29 15:24:36.500753 | orchestrator |  ]
2025-08-29 15:24:36.500763 | orchestrator | }
2025-08-29 15:24:36.500774 | orchestrator |
2025-08-29 15:24:36.500784 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:24:36.500795 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:24:36.500806 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:24:36.500822 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:24:36.880143 | orchestrator |
2025-08-29 15:24:36.880245 | orchestrator |
2025-08-29 15:24:36.880260 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:24:36.880275 | orchestrator | Friday 29 August 2025 15:24:36 +0000 (0:00:00.445) 0:00:16.468 *********
2025-08-29 15:24:36.880286 | orchestrator | ===============================================================================
2025-08-29 15:24:36.880298 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s
2025-08-29 15:24:36.880309 | orchestrator | Write report file ------------------------------------------------------- 1.84s
2025-08-29 15:24:36.880320 | orchestrator | Aggregate test results step one ----------------------------------------- 1.38s
2025-08-29 15:24:36.880331 | orchestrator | Get container info ------------------------------------------------------ 1.07s
2025-08-29 15:24:36.880342 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.92s
2025-08-29 15:24:36.880354 | orchestrator | Create report output directory ------------------------------------------ 0.91s
2025-08-29 15:24:36.880365 | orchestrator | Aggregate test results step one ----------------------------------------- 0.75s
2025-08-29 15:24:36.880376 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s
2025-08-29 15:24:36.880387 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s
2025-08-29 15:24:36.880397 | orchestrator | Print report file information ------------------------------------------- 0.45s
2025-08-29 15:24:36.880408 | orchestrator | Set test result to failed if container is missing ----------------------- 0.38s
2025-08-29 15:24:36.880445 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s
2025-08-29 15:24:36.880473 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.35s
2025-08-29 15:24:36.880490 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.34s
2025-08-29 15:24:36.880502 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-08-29 15:24:36.880513 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.32s
2025-08-29 15:24:36.880524 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s
2025-08-29 15:24:36.880535 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2025-08-29 15:24:36.880546 | orchestrator | Define report vars ------------------------------------------------------ 0.27s
2025-08-29 15:24:36.880557 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.27s
2025-08-29 15:24:37.199324 | orchestrator | + osism validate ceph-osds
2025-08-29 15:24:58.998819 | orchestrator |
2025-08-29 15:24:58.999025 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-08-29 15:24:58.999048 | orchestrator |
2025-08-29 15:24:58.999061 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-08-29 15:24:58.999073 | orchestrator | Friday 29 August 2025 15:24:54 +0000 (0:00:00.458) 0:00:00.458 *********
2025-08-29 15:24:58.999086 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:58.999097 | orchestrator |
2025-08-29 15:24:58.999109 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 15:24:58.999120 | orchestrator | Friday 29 August 2025 15:24:54 +0000 (0:00:00.773) 0:00:01.232 *********
2025-08-29 15:24:58.999132 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:58.999143 | orchestrator |
2025-08-29 15:24:58.999154 | orchestrator | TASK [Create report output directory] ******************************************
2025-08-29 15:24:58.999165 | orchestrator | Friday 29 August 2025 15:24:55 +0000 (0:00:00.281) 0:00:01.513 *********
2025-08-29 15:24:58.999176 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 15:24:58.999188 | orchestrator |
2025-08-29 15:24:58.999200 | orchestrator | TASK [Define report vars] ******************************************************
2025-08-29 15:24:58.999211 | orchestrator | Friday 29 August 2025 15:24:56 +0000 (0:00:01.149) 0:00:02.663 *********
2025-08-29 15:24:58.999223 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:24:58.999236 | orchestrator |
2025-08-29 15:24:58.999247 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-08-29 15:24:58.999258 | orchestrator | Friday 29 August 2025 15:24:56 +0000 (0:00:00.139) 0:00:02.802 *********
2025-08-29 15:24:58.999270 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:24:58.999281 | orchestrator |
2025-08-29 15:24:58.999292 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-08-29 15:24:58.999303 | orchestrator | Friday 29 August 2025 15:24:56 +0000 (0:00:00.136) 0:00:02.939 *********
2025-08-29 15:24:58.999314 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:24:58.999325 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:24:58.999337 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:24:58.999348 | orchestrator |
2025-08-29 15:24:58.999359 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-08-29 15:24:58.999370 | orchestrator | Friday 29 August 2025 15:24:56 +0000 (0:00:00.337) 0:00:03.276 *********
2025-08-29 15:24:58.999381 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:24:58.999392 | orchestrator |
2025-08-29 15:24:58.999403 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-08-29 15:24:58.999415 | orchestrator | Friday 29 August 2025 15:24:57 +0000 (0:00:00.179) 0:00:03.456 *********
2025-08-29 15:24:58.999426 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:24:58.999437 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:24:58.999448 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:24:58.999483 | orchestrator |
2025-08-29 15:24:58.999495 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-08-29 15:24:58.999507 | orchestrator | Friday 29 August 2025 15:24:57 +0000 (0:00:00.356) 0:00:03.812 *********
2025-08-29 15:24:58.999518 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:24:58.999529 | orchestrator |
2025-08-29 15:24:58.999540 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 15:24:58.999551 | orchestrator | Friday 29 August 2025 15:24:58 +0000 (0:00:00.635) 0:00:04.448 *********
2025-08-29 15:24:58.999562 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:24:58.999573 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:24:58.999584 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:24:58.999595 | orchestrator |
2025-08-29 15:24:58.999606 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-08-29 15:24:58.999617 | orchestrator | Friday 29 August 2025 15:24:58 +0000 (0:00:00.564) 0:00:05.012 *********
2025-08-29 15:24:58.999632 | orchestrator | skipping: [testbed-node-3] => (item={'id': '08a2b09c13b952cb75036b647aedcd4c81192c087430099effe9536c7c7ca45a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-08-29 15:24:58.999646 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5cf94d8c9efedddb81cb6443b8a656dbefc96a9e5e625c9bb69d717d3d2d7816', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:58.999658 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c092e8c048eaa50a00424ac452ccb69896a5fd78ccd578cdad43bbeb7d43e35', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:58.999686 | orchestrator | skipping: [testbed-node-3] => (item={'id': '504138c3e96c16226960f6d715ce0a83c682759814d32f84ec7070fbc013ce02', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:58.999699 | orchestrator | skipping: [testbed-node-3] => (item={'id': '95270794cae4ac4a44079c1ea16b03524e287a56145c214c8b1bb51ce1790419', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-08-29 15:24:58.999729 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c251d41a94ffbc84163f8ff594c5fd50a392777e1b380334999eebb015d91620', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-08-29 15:24:58.999742 | orchestrator | skipping: [testbed-node-3] => (item={'id': '98f6a7322bb12c588f917c8fbc371b61f6b824e20f743db7c8dfb18927f06405', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-08-29 15:24:58.999763 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c1fd0cbbaa083f527d35c0ce480a12fccbb86d77278a7682208e468d6593a060', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2025-08-29 15:24:58.999775 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6537be5c11a9694676347e1bc1ea843e9bff64bf0fff6ee7a42e311bff730bf7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-08-29 15:24:58.999786 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2b213d823e597bc97e992237e4e9306343a8cc7082d773702841bec7fa10999', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-08-29 15:24:58.999807 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d7c1974e0b06e17ad7bfcaf3e29629dce15110076c644e032c38af4b65ca341', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-08-29 15:24:58.999819 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca3753f417bac002cf4b0083f8a10f89d4f2cce6f647c829add633d3257ebf4a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-08-29 15:24:58.999833 | orchestrator | ok: [testbed-node-3] => (item={'id': '623d570022f39d8386dd9332e7c9ac155491e014c193204d439e04f15b46f13c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'})
2025-08-29 15:24:58.999845 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f63a26a7974316349e0aeae6d614fafc590aaa3d5a064b4a59e510b0a0ac0d20', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'})
2025-08-29 15:24:58.999857 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7dc2b9256dddd5fe1c0f508ece83cc9782e26d7f0c77a4e2edc3574aa643f63', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-08-29 15:24:58.999868 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4c29ba0e0f498545b3848ed2ac7887cada11a26581208afdc86ef34e1aaca419', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-08-29 15:24:58.999881 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1c0afb662206ef1c552afbf70192bdc9322c5707c40de0c15ffd0022f5237ab9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2025-08-29 15:24:58.999892 | orchestrator | skipping: [testbed-node-3] => (item={'id': '52a77e2061a81aac5dec6ffaa88a446a63fc73e051590461f0e0a8ce20288ebf', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-08-29 15:24:58.999910 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e7e86e336433851875c35f768b15e8a2eea6c5f14625562bb1bc88a76b3e0b69', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2025-08-29 15:24:58.999922 | orchestrator | skipping: [testbed-node-3] => (item={'id': '074e5412ee858c472b80ed2dda56711a990ac88322130af7fe345e302355deaa', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-08-29 15:24:58.999940 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2045a0c976ef7707d133538b141b6bcacdbe74165b6d97b9d5986a04e152a0d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-08-29 15:24:59.273139 | orchestrator | skipping: [testbed-node-4] => (item={'id': '016ef0b857c6769c3b930054cd1aa4953d604d4555bc808c37e35d2e17f71ae2', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:59.273239 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9324f5e37d5f45d58d2a07f11b4264fa566a8a3ebb7f68938118a5ceee3ade3a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:59.273254 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ebdf49082f61364ae88b42fffb2999a82ac786d787e09ac75fa7fa55bb68b91', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-08-29 15:24:59.273288 | orchestrator | skipping: [testbed-node-4] => (item={'id': '70626aa7f11d673102359e52f0b2cdaa35b60a6a9a3c392f60db4c389b66e73d', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-08-29 15:24:59.273299 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6e389377474ee4ac546a63075695dbcc822eab5f506b5d3ae3fbd514190cdbf9', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-08-29 15:24:59.273310 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07a07f36fafb9c5c48b741ca5c3cffb0430d482df7605b77de13026363ebb048', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-08-29 15:24:59.273322 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6f7cbb096f7aba99b5d07d30a013350b8e5b75fae8d271e5a7e78c2f1ca77e02', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2025-08-29 15:24:59.273333 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6331f9faaeb46460476190afa7c61cb89d63213e89cf69f52b04172b2a923f9f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2025-08-29 15:24:59.273343 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f70017759e993dbbbd0f88af3697a0af3da409c97da7dd9fc17b1ffec8c4ea78', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2025-08-29 15:24:59.273354 | orchestrator | skipping: [testbed-node-4] => (item={'id': '76c54884690a561cea0f05d7aaffc38df68494751b8619643aebcc59f63ba477', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-08-29 15:24:59.273365 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8e7eade9d9b1978f83d62ad413d1ff4efe9ba2d46d62e44115560c321f5496c4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-08-29 15:24:59.273377 | orchestrator | ok: [testbed-node-4] => (item={'id': '82f4efaa6bc6b4cdd1859bbefb54ceec5f55af9fdd07fe8db49618992eb16993', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'})
2025-08-29 15:24:59.273389 | orchestrator | ok: [testbed-node-4] => (item={'id': 'cc2e6ef7ef6ccb20092da0922813db3920aa4d4bb19bc4f5f3846d3fdd27a1e2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'})
2025-08-29 15:24:59.273399 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97a92f8dcdafaff9d1fc99fb91672b51c98332e17a63fc672e65125cfd0d4b65', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-08-29 15:24:59.273443 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0864b96ffc64ee0f3711d8e78b52e4b8d571531c2638c05a85ec4b3cf6448575', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-08-29 15:24:59.273456 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a08d41c5c25fc29049d039323f33851ad5fcbc2e5a6e2c5257cb420e07fa479', 'image':
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-08-29 15:24:59.273489 | orchestrator | skipping: [testbed-node-4] => (item={'id': '81abd6cf81f3e071b887153df67a7907da4477b392fd748d601c426456842302', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 15:24:59.273512 | orchestrator | skipping: [testbed-node-4] => (item={'id': '182ae5899be7d7ab14fa028d923c1eeb065d5728b58c7d43f57ddd8ee8d70f60', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 15:24:59.273523 | orchestrator | skipping: [testbed-node-4] => (item={'id': '470fb8f4a061d513f08b2557ba550abf730bb161c8ae27566a5cabd6c99564d9', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 15:24:59.273533 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ffa3aec6fdd04d7f43300ee6a6b8af31c29aab1a4de02d786e16a59b22dde08c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 15:24:59.273543 | orchestrator | skipping: [testbed-node-5] => (item={'id': '649bc0b8ee3c2b0ac0742be52c8b678dbbeda6bdd1c91c89f147ca4ed55d93a3', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:24:59.273553 | orchestrator | skipping: [testbed-node-5] => (item={'id': '00e6d0d1d5c70e76f4a11f30d7c7f52b50b54eb75d6eac20a25c79c849d4544f', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 
15:24:59.273563 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a6ffd615d3ad46414e572b3ad0c53843ac44bbffee137a52b700d07e3f3b191', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 15:24:59.273573 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a5ca40797e4c601e32be6ab8fbf262322722ddea4fe3d4138fdd3850857546cb', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 15:24:59.273583 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a5bb73a5f73960e866aff13eab0c0d10beeb573b247443d64bc00452f7128314', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 15:24:59.273594 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b3821348677bb3cda00fceeef43fb5a61e0779b5e0e3306eb83f12a9fbf5012e', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 15:24:59.273609 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3daea68ca46163d72ee70b5abce08f858b15c01fdcabf3a0a1bf0a77e6228474', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-08-29 15:24:59.273620 | orchestrator | skipping: [testbed-node-5] => (item={'id': '210daff549e11a6e6741bac2692698155a1d5e494ff41fe1e9b94b070bfe9241', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-08-29 15:24:59.273633 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '0df330dfa395a22c5464402b514a5dc4ca8b80d4039f631def372b70268c6b70', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 15:24:59.273658 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0bd62768f092681df149849cb3c94fb232fcf34c14b76eb5c5411cb7196b131a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 15:25:07.430599 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9796c1e74b6d472b56c21c49e062d29fced4413b0eb0eede2dad9eb371c670c7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-08-29 15:25:07.430748 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c2115fea01af2b359fb3d6cec18ba93357f946680d4038bf4fb537764024d14f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-08-29 15:25:07.430779 | orchestrator | ok: [testbed-node-5] => (item={'id': '58b4adfc94dbbe9798cb6157c6f74014082f9f851cb7e6adfaab4f959b4fe1e6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-08-29 15:25:07.430799 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b5a31b2f7351a1244d8dc8928eee4d81397a9d07442ab874f5ffb43acf6f9ce', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-08-29 15:25:07.430819 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9226fe5ca7ab92f230629f58cd65f15574a99a910b1c970eb90dad77e69eb2e3', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 
'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 15:25:07.430842 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e5d605d508a91bb28807585fa448573c72803e106410d884ec1e4a5265e0c2a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-08-29 15:25:07.430862 | orchestrator | skipping: [testbed-node-5] => (item={'id': '503b55266d97819243b80795b2e5d5e9d3692edb7e94a04fa42bc2916718c5f0', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 15:25:07.430881 | orchestrator | skipping: [testbed-node-5] => (item={'id': '454e725a2bc88acff652416cdcc13a648e606942008394cf695c2ca694ac5560', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 15:25:07.430901 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd8f3dcfc888dc3a358a33d3e65dbee24d1c0cc46c49d616207c59822b187ace', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 15:25:07.430917 | orchestrator | 2025-08-29 15:25:07.430930 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-08-29 15:25:07.430943 | orchestrator | Friday 29 August 2025 15:24:59 +0000 (0:00:00.587) 0:00:05.599 ********* 2025-08-29 15:25:07.430955 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.430967 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.431022 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.431035 | orchestrator | 2025-08-29 15:25:07.431046 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-08-29 15:25:07.431058 | orchestrator | Friday 29 August 2025 15:24:59 +0000 
(0:00:00.318) 0:00:05.918 ********* 2025-08-29 15:25:07.431071 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431085 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:07.431097 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:07.431110 | orchestrator | 2025-08-29 15:25:07.431123 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-08-29 15:25:07.431135 | orchestrator | Friday 29 August 2025 15:24:59 +0000 (0:00:00.321) 0:00:06.239 ********* 2025-08-29 15:25:07.431148 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.431186 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.431214 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.431228 | orchestrator | 2025-08-29 15:25:07.431241 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:25:07.431253 | orchestrator | Friday 29 August 2025 15:25:00 +0000 (0:00:00.601) 0:00:06.841 ********* 2025-08-29 15:25:07.431266 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.431278 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.431291 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.431303 | orchestrator | 2025-08-29 15:25:07.431316 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-08-29 15:25:07.431328 | orchestrator | Friday 29 August 2025 15:25:00 +0000 (0:00:00.298) 0:00:07.139 ********* 2025-08-29 15:25:07.431341 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-08-29 15:25:07.431355 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-08-29 15:25:07.431369 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431381 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-08-29 
15:25:07.431394 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-08-29 15:25:07.431429 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:07.431442 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-08-29 15:25:07.431455 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-08-29 15:25:07.431467 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:07.431479 | orchestrator | 2025-08-29 15:25:07.431490 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-08-29 15:25:07.431501 | orchestrator | Friday 29 August 2025 15:25:01 +0000 (0:00:00.364) 0:00:07.504 ********* 2025-08-29 15:25:07.431513 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.431524 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.431536 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.431547 | orchestrator | 2025-08-29 15:25:07.431558 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 15:25:07.431569 | orchestrator | Friday 29 August 2025 15:25:01 +0000 (0:00:00.319) 0:00:07.824 ********* 2025-08-29 15:25:07.431580 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431592 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:07.431603 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:07.431614 | orchestrator | 2025-08-29 15:25:07.431625 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 15:25:07.431636 | orchestrator | Friday 29 August 2025 15:25:02 +0000 (0:00:00.554) 0:00:08.378 ********* 2025-08-29 15:25:07.431647 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431658 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:07.431669 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:07.431681 | orchestrator | 2025-08-29 15:25:07.431692 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-08-29 15:25:07.431703 | orchestrator | Friday 29 August 2025 15:25:02 +0000 (0:00:00.354) 0:00:08.733 ********* 2025-08-29 15:25:07.431714 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.431725 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.431736 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.431748 | orchestrator | 2025-08-29 15:25:07.431759 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:25:07.431770 | orchestrator | Friday 29 August 2025 15:25:02 +0000 (0:00:00.344) 0:00:09.078 ********* 2025-08-29 15:25:07.431781 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431792 | orchestrator | 2025-08-29 15:25:07.431803 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:25:07.431823 | orchestrator | Friday 29 August 2025 15:25:02 +0000 (0:00:00.260) 0:00:09.338 ********* 2025-08-29 15:25:07.431834 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431845 | orchestrator | 2025-08-29 15:25:07.431856 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:25:07.431867 | orchestrator | Friday 29 August 2025 15:25:03 +0000 (0:00:00.251) 0:00:09.590 ********* 2025-08-29 15:25:07.431878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.431890 | orchestrator | 2025-08-29 15:25:07.431901 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:07.431913 | orchestrator | Friday 29 August 2025 15:25:03 +0000 (0:00:00.289) 0:00:09.880 ********* 2025-08-29 15:25:07.431924 | orchestrator | 2025-08-29 15:25:07.431935 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-08-29 15:25:07.431946 | orchestrator | Friday 29 August 2025 15:25:03 +0000 (0:00:00.070) 0:00:09.950 ********* 2025-08-29 15:25:07.431957 | orchestrator | 2025-08-29 15:25:07.431969 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 15:25:07.431997 | orchestrator | Friday 29 August 2025 15:25:03 +0000 (0:00:00.075) 0:00:10.026 ********* 2025-08-29 15:25:07.432009 | orchestrator | 2025-08-29 15:25:07.432020 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 15:25:07.432031 | orchestrator | Friday 29 August 2025 15:25:03 +0000 (0:00:00.308) 0:00:10.334 ********* 2025-08-29 15:25:07.432042 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.432053 | orchestrator | 2025-08-29 15:25:07.432064 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-08-29 15:25:07.432075 | orchestrator | Friday 29 August 2025 15:25:04 +0000 (0:00:00.295) 0:00:10.629 ********* 2025-08-29 15:25:07.432086 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:07.432097 | orchestrator | 2025-08-29 15:25:07.432109 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:25:07.432120 | orchestrator | Friday 29 August 2025 15:25:04 +0000 (0:00:00.273) 0:00:10.903 ********* 2025-08-29 15:25:07.432131 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.432142 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:07.432153 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:07.432165 | orchestrator | 2025-08-29 15:25:07.432176 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-08-29 15:25:07.432187 | orchestrator | Friday 29 August 2025 15:25:04 +0000 (0:00:00.360) 0:00:11.264 ********* 2025-08-29 15:25:07.432198 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 15:25:07.432209 | orchestrator | 2025-08-29 15:25:07.432220 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-08-29 15:25:07.432232 | orchestrator | Friday 29 August 2025 15:25:05 +0000 (0:00:00.242) 0:00:11.506 ********* 2025-08-29 15:25:07.432243 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:25:07.432254 | orchestrator | 2025-08-29 15:25:07.432265 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-08-29 15:25:07.432276 | orchestrator | Friday 29 August 2025 15:25:06 +0000 (0:00:01.683) 0:00:13.190 ********* 2025-08-29 15:25:07.432288 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.432299 | orchestrator | 2025-08-29 15:25:07.432310 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-08-29 15:25:07.432321 | orchestrator | Friday 29 August 2025 15:25:06 +0000 (0:00:00.136) 0:00:13.326 ********* 2025-08-29 15:25:07.432333 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:07.432344 | orchestrator | 2025-08-29 15:25:07.432355 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-08-29 15:25:07.432366 | orchestrator | Friday 29 August 2025 15:25:07 +0000 (0:00:00.329) 0:00:13.655 ********* 2025-08-29 15:25:07.432384 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.121237 | orchestrator | 2025-08-29 15:25:21.121335 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-08-29 15:25:21.121350 | orchestrator | Friday 29 August 2025 15:25:07 +0000 (0:00:00.113) 0:00:13.769 ********* 2025-08-29 15:25:21.121383 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121395 | orchestrator | 2025-08-29 15:25:21.121405 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 
15:25:21.121415 | orchestrator | Friday 29 August 2025 15:25:07 +0000 (0:00:00.135) 0:00:13.904 ********* 2025-08-29 15:25:21.121425 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121435 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.121445 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.121455 | orchestrator | 2025-08-29 15:25:21.121465 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-08-29 15:25:21.121475 | orchestrator | Friday 29 August 2025 15:25:08 +0000 (0:00:00.580) 0:00:14.484 ********* 2025-08-29 15:25:21.121485 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:25:21.121496 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:25:21.121506 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:25:21.121516 | orchestrator | 2025-08-29 15:25:21.121526 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-08-29 15:25:21.121536 | orchestrator | Friday 29 August 2025 15:25:10 +0000 (0:00:02.429) 0:00:16.914 ********* 2025-08-29 15:25:21.121545 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121555 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.121565 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.121575 | orchestrator | 2025-08-29 15:25:21.121584 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-08-29 15:25:21.121594 | orchestrator | Friday 29 August 2025 15:25:10 +0000 (0:00:00.318) 0:00:17.232 ********* 2025-08-29 15:25:21.121604 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121614 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.121624 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.121633 | orchestrator | 2025-08-29 15:25:21.121643 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-08-29 15:25:21.121653 | orchestrator | Friday 29 August 2025 
15:25:11 +0000 (0:00:00.627) 0:00:17.860 ********* 2025-08-29 15:25:21.121663 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.121673 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:21.121683 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:21.121692 | orchestrator | 2025-08-29 15:25:21.121702 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-08-29 15:25:21.121712 | orchestrator | Friday 29 August 2025 15:25:12 +0000 (0:00:00.560) 0:00:18.420 ********* 2025-08-29 15:25:21.121722 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121732 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.121741 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.121751 | orchestrator | 2025-08-29 15:25:21.121761 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-08-29 15:25:21.121771 | orchestrator | Friday 29 August 2025 15:25:12 +0000 (0:00:00.283) 0:00:18.703 ********* 2025-08-29 15:25:21.121781 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.121793 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:21.121804 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:21.121815 | orchestrator | 2025-08-29 15:25:21.121825 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-08-29 15:25:21.121878 | orchestrator | Friday 29 August 2025 15:25:12 +0000 (0:00:00.261) 0:00:18.965 ********* 2025-08-29 15:25:21.121890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.121901 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:21.121912 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:21.121923 | orchestrator | 2025-08-29 15:25:21.121934 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 15:25:21.121945 | orchestrator | Friday 29 August 2025 15:25:12 +0000 
(0:00:00.293) 0:00:19.259 ********* 2025-08-29 15:25:21.121957 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.121968 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.121979 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.121997 | orchestrator | 2025-08-29 15:25:21.122065 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-08-29 15:25:21.122077 | orchestrator | Friday 29 August 2025 15:25:13 +0000 (0:00:00.718) 0:00:19.977 ********* 2025-08-29 15:25:21.122088 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.122098 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.122109 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.122120 | orchestrator | 2025-08-29 15:25:21.122131 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-08-29 15:25:21.122142 | orchestrator | Friday 29 August 2025 15:25:14 +0000 (0:00:00.543) 0:00:20.520 ********* 2025-08-29 15:25:21.122153 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.122165 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.122176 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.122185 | orchestrator | 2025-08-29 15:25:21.122200 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-08-29 15:25:21.122210 | orchestrator | Friday 29 August 2025 15:25:14 +0000 (0:00:00.307) 0:00:20.827 ********* 2025-08-29 15:25:21.122220 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.122230 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:25:21.122240 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:25:21.122250 | orchestrator | 2025-08-29 15:25:21.122260 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-08-29 15:25:21.122270 | orchestrator | Friday 29 August 2025 15:25:14 +0000 (0:00:00.341) 0:00:21.169 ********* 2025-08-29 
15:25:21.122280 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:25:21.122291 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:25:21.122301 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:25:21.122311 | orchestrator | 2025-08-29 15:25:21.122321 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 15:25:21.122331 | orchestrator | Friday 29 August 2025 15:25:15 +0000 (0:00:00.617) 0:00:21.787 ********* 2025-08-29 15:25:21.122341 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:21.122351 | orchestrator | 2025-08-29 15:25:21.122361 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 15:25:21.122371 | orchestrator | Friday 29 August 2025 15:25:15 +0000 (0:00:00.283) 0:00:22.070 ********* 2025-08-29 15:25:21.122381 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:25:21.122391 | orchestrator | 2025-08-29 15:25:21.122418 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 15:25:21.122428 | orchestrator | Friday 29 August 2025 15:25:16 +0000 (0:00:00.339) 0:00:22.409 ********* 2025-08-29 15:25:21.122439 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:21.122449 | orchestrator | 2025-08-29 15:25:21.122459 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 15:25:21.122469 | orchestrator | Friday 29 August 2025 15:25:17 +0000 (0:00:01.664) 0:00:24.074 ********* 2025-08-29 15:25:21.122479 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 15:25:21.122489 | orchestrator | 2025-08-29 15:25:21.122499 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 15:25:21.122509 | orchestrator | Friday 29 August 2025 15:25:18 +0000 (0:00:00.297) 0:00:24.371 ********* 2025-08-29 
15:25:21.122519 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 15:25:21.122529 | orchestrator |
2025-08-29 15:25:21.122539 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:25:21.122548 | orchestrator | Friday 29 August 2025 15:25:18 +0000 (0:00:00.294) 0:00:24.666 *********
2025-08-29 15:25:21.122559 | orchestrator |
2025-08-29 15:25:21.122569 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:25:21.122578 | orchestrator | Friday 29 August 2025 15:25:18 +0000 (0:00:00.074) 0:00:24.740 *********
2025-08-29 15:25:21.122588 | orchestrator |
2025-08-29 15:25:21.122598 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 15:25:21.122608 | orchestrator | Friday 29 August 2025 15:25:18 +0000 (0:00:00.089) 0:00:24.829 *********
2025-08-29 15:25:21.122624 | orchestrator |
2025-08-29 15:25:21.122635 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 15:25:21.122645 | orchestrator | Friday 29 August 2025 15:25:18 +0000 (0:00:00.072) 0:00:24.902 *********
2025-08-29 15:25:21.122655 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 15:25:21.122665 | orchestrator |
2025-08-29 15:25:21.122675 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 15:25:21.122685 | orchestrator | Friday 29 August 2025 15:25:20 +0000 (0:00:01.586) 0:00:26.489 *********
2025-08-29 15:25:21.122695 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-08-29 15:25:21.122705 | orchestrator |  "msg": [
2025-08-29 15:25:21.122715 | orchestrator |  "Validator run completed.",
2025-08-29 15:25:21.122725 | orchestrator |  "You can find the report file here:",
2025-08-29 15:25:21.122735 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-08-29T15:24:54+00:00-report.json",
2025-08-29 15:25:21.122746 | orchestrator |  "on the following host:",
2025-08-29 15:25:21.122756 | orchestrator |  "testbed-manager"
2025-08-29 15:25:21.122766 | orchestrator |  ]
2025-08-29 15:25:21.122777 | orchestrator | }
2025-08-29 15:25:21.122787 | orchestrator |
2025-08-29 15:25:21.122797 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:25:21.122808 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-08-29 15:25:21.122820 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:25:21.122830 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:25:21.122840 | orchestrator |
2025-08-29 15:25:21.122850 | orchestrator |
2025-08-29 15:25:21.122860 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:25:21.122870 | orchestrator | Friday 29 August 2025 15:25:21 +0000 (0:00:00.951) 0:00:27.440 *********
2025-08-29 15:25:21.122880 | orchestrator | ===============================================================================
2025-08-29 15:25:21.122889 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.43s
2025-08-29 15:25:21.122899 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.68s
2025-08-29 15:25:21.122909 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s
2025-08-29 15:25:21.122919 | orchestrator | Write report file ------------------------------------------------------- 1.59s
2025-08-29 15:25:21.122933 | orchestrator | Create report output directory ------------------------------------------ 1.15s
2025-08-29 15:25:21.122943 | orchestrator | Print report file information ------------------------------------------- 0.95s
2025-08-29 15:25:21.122953 | orchestrator | Get timestamp for report file ------------------------------------------- 0.77s
2025-08-29 15:25:21.122963 | orchestrator | Prepare test data ------------------------------------------------------- 0.72s
2025-08-29 15:25:21.122973 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.64s
2025-08-29 15:25:21.122983 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.63s
2025-08-29 15:25:21.122993 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.62s
2025-08-29 15:25:21.123020 | orchestrator | Set test result to passed if count matches ------------------------------ 0.60s
2025-08-29 15:25:21.123030 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.59s
2025-08-29 15:25:21.123041 | orchestrator | Prepare test data ------------------------------------------------------- 0.58s
2025-08-29 15:25:21.123050 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s
2025-08-29 15:25:21.123066 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.56s
2025-08-29 15:25:21.123082 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.55s
2025-08-29 15:25:21.472232 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.54s
2025-08-29 15:25:21.472343 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s
2025-08-29 15:25:21.472364 | orchestrator | Get list of ceph-osd containers that are not running -------------------- 0.36s
2025-08-29 15:25:21.981085 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-08-29 15:25:21.991671 | orchestrator | + set -e
2025-08-29 15:25:21.991721 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 15:25:21.991731 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 15:25:21.991741 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 15:25:21.991749 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 15:25:21.991758 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 15:25:21.991767 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 15:25:21.991778 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 15:25:21.991787 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 15:25:21.991796 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 15:25:21.991805 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 15:25:21.991814 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 15:25:21.991823 | orchestrator | ++ export ARA=false
2025-08-29 15:25:21.991832 | orchestrator | ++ ARA=false
2025-08-29 15:25:21.991842 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 15:25:21.991857 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 15:25:21.991873 | orchestrator | ++ export TEMPEST=false
2025-08-29 15:25:21.991889 | orchestrator | ++ TEMPEST=false
2025-08-29 15:25:21.991904 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 15:25:21.991918 | orchestrator | ++ IS_ZUUL=true
2025-08-29 15:25:21.991934 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 15:25:21.991951 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18
2025-08-29 15:25:21.991967 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 15:25:21.991981 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 15:25:21.991992 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 15:25:21.992001 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 15:25:21.992039 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 15:25:21.992048 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 15:25:21.992133 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 15:25:21.992146 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 15:25:21.992155 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-08-29 15:25:21.992164 | orchestrator | + source /etc/os-release
2025-08-29 15:25:21.992173 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-08-29 15:25:21.992182 | orchestrator | ++ NAME=Ubuntu
2025-08-29 15:25:21.992191 | orchestrator | ++ VERSION_ID=24.04
2025-08-29 15:25:21.992200 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-08-29 15:25:21.992209 | orchestrator | ++ VERSION_CODENAME=noble
2025-08-29 15:25:21.992218 | orchestrator | ++ ID=ubuntu
2025-08-29 15:25:21.992226 | orchestrator | ++ ID_LIKE=debian
2025-08-29 15:25:21.992235 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-08-29 15:25:21.992244 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-08-29 15:25:21.992253 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-08-29 15:25:21.992262 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-08-29 15:25:21.992272 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-08-29 15:25:21.992281 | orchestrator | ++ LOGO=ubuntu-logo
2025-08-29 15:25:21.992290 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-08-29 15:25:21.992300 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-08-29 15:25:21.992318 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-08-29 15:25:22.034438 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-08-29 15:25:48.664814 | orchestrator |
2025-08-29 15:25:48.664921 | orchestrator | # Status of Elasticsearch
2025-08-29 15:25:48.664939 | orchestrator |
2025-08-29 15:25:48.664951 | orchestrator | + pushd /opt/configuration/contrib
2025-08-29 15:25:48.664979 | orchestrator | + echo
2025-08-29 15:25:48.664991 | orchestrator | + echo '# Status of Elasticsearch'
2025-08-29 15:25:48.665002 | orchestrator | + echo
2025-08-29 15:25:48.665014 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-08-29 15:25:48.857603 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-08-29 15:25:48.857700 | orchestrator |
2025-08-29 15:25:48.857717 | orchestrator | # Status of MariaDB
2025-08-29 15:25:48.857730 | orchestrator |
2025-08-29 15:25:48.857742 | orchestrator | + echo
2025-08-29 15:25:48.857753 | orchestrator | + echo '# Status of MariaDB'
2025-08-29 15:25:48.857765 | orchestrator | + echo
2025-08-29 15:25:48.857776 | orchestrator | + MARIADB_USER=root_shard_0
2025-08-29 15:25:48.857788 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-08-29 15:25:48.911392 | orchestrator | Reading package lists...
2025-08-29 15:25:49.350676 | orchestrator | Building dependency tree...
2025-08-29 15:25:49.350946 | orchestrator | Reading state information...
2025-08-29 15:25:49.877611 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-08-29 15:25:49.877710 | orchestrator | bc set to manually installed.
2025-08-29 15:25:49.877726 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2025-08-29 15:25:50.616922 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-08-29 15:25:50.618470 | orchestrator |
2025-08-29 15:25:50.618493 | orchestrator | # Status of Prometheus
2025-08-29 15:25:50.618502 | orchestrator |
2025-08-29 15:25:50.618509 | orchestrator | + echo
2025-08-29 15:25:50.618516 | orchestrator | + echo '# Status of Prometheus'
2025-08-29 15:25:50.618523 | orchestrator | + echo
2025-08-29 15:25:50.618530 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-08-29 15:25:50.685433 | orchestrator | Unauthorized
2025-08-29 15:25:50.689325 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-08-29 15:25:50.774344 | orchestrator | Unauthorized
2025-08-29 15:25:50.779583 | orchestrator |
2025-08-29 15:25:50.779641 | orchestrator | # Status of RabbitMQ
2025-08-29 15:25:50.779655 | orchestrator |
2025-08-29 15:25:50.779667 | orchestrator | + echo
2025-08-29 15:25:50.779678 | orchestrator | + echo '# Status of RabbitMQ'
2025-08-29 15:25:50.779690 | orchestrator | + echo
2025-08-29 15:25:50.779703 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-08-29 15:25:51.344696 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-08-29 15:25:51.357484 | orchestrator |
2025-08-29 15:25:51.357565 | orchestrator | # Status of Redis
2025-08-29 15:25:51.357580 | orchestrator |
2025-08-29 15:25:51.357592 | orchestrator | + echo
2025-08-29 15:25:51.357603 | orchestrator | + echo '# Status of Redis'
2025-08-29 15:25:51.357615 | orchestrator | + echo
2025-08-29 15:25:51.357628 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-08-29 15:25:51.365941 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002456s;;;0.000000;10.000000
2025-08-29 15:25:51.366573 | orchestrator |
2025-08-29 15:25:51.366608 | orchestrator | # Create backup of MariaDB database
2025-08-29 15:25:51.366620 | orchestrator |
2025-08-29 15:25:51.366631 | orchestrator | + popd
2025-08-29 15:25:51.366642 | orchestrator | + echo
2025-08-29 15:25:51.366652 | orchestrator | + echo '# Create backup of MariaDB database'
2025-08-29 15:25:51.366662 | orchestrator | + echo
2025-08-29 15:25:51.366673 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-08-29 15:25:53.514876 | orchestrator | 2025-08-29 15:25:53 | INFO  | Task edd6077a-ad3c-49a7-ac85-afdf7d37529c (mariadb_backup) was prepared for execution.
2025-08-29 15:25:53.515156 | orchestrator | 2025-08-29 15:25:53 | INFO  | It takes a moment until task edd6077a-ad3c-49a7-ac85-afdf7d37529c (mariadb_backup) has been started and output is visible here.
2025-08-29 15:26:23.817973 | orchestrator |
2025-08-29 15:26:23.818153 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:26:23.818171 | orchestrator |
2025-08-29 15:26:23.818184 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:26:23.818196 | orchestrator | Friday 29 August 2025 15:25:57 +0000 (0:00:00.224) 0:00:00.224 *********
2025-08-29 15:26:23.818231 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:26:23.818244 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:26:23.818255 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:26:23.818266 | orchestrator |
2025-08-29 15:26:23.818277 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:26:23.818288 | orchestrator | Friday 29 August 2025 15:25:58 +0000 (0:00:00.343) 0:00:00.568 *********
2025-08-29 15:26:23.818300 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-08-29 15:26:23.818311 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-08-29 15:26:23.818322 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-08-29 15:26:23.818333 | orchestrator |
2025-08-29 15:26:23.818344 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-08-29 15:26:23.818355 | orchestrator |
2025-08-29 15:26:23.818366 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-08-29 15:26:23.818377 | orchestrator | Friday 29 August 2025 15:25:59 +0000 (0:00:00.700) 0:00:01.268 *********
2025-08-29 15:26:23.818388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:26:23.818400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:26:23.818410 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:26:23.818421 | orchestrator |
2025-08-29 15:26:23.818432 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:26:23.818444 | orchestrator | Friday 29 August 2025 15:25:59 +0000 (0:00:00.402) 0:00:01.671 *********
2025-08-29 15:26:23.818456 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:26:23.818470 | orchestrator |
2025-08-29 15:26:23.818483 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-08-29 15:26:23.818496 | orchestrator | Friday 29 August 2025 15:26:00 +0000 (0:00:00.584) 0:00:02.255 *********
2025-08-29 15:26:23.818515 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:26:23.818534 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:26:23.818553 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:26:23.818570 | orchestrator |
2025-08-29 15:26:23.818589 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-08-29 15:26:23.818607 | orchestrator | Friday 29 August 2025 15:26:03 +0000 (0:00:03.604) 0:00:05.860 *********
2025-08-29 15:26:23.818625 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-08-29 15:26:23.818644 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-08-29 15:26:23.818665 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 15:26:23.818685 | orchestrator | mariadb_bootstrap_restart
2025-08-29 15:26:23.818705 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:26:23.818726 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:26:23.818746 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:26:23.818764 | orchestrator |
2025-08-29 15:26:23.818776 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-08-29 15:26:23.818790 | orchestrator | skipping: no hosts matched
2025-08-29 15:26:23.818802 | orchestrator |
2025-08-29 15:26:23.818815 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 15:26:23.818826 | orchestrator | skipping: no hosts matched
2025-08-29 15:26:23.818838 | orchestrator |
2025-08-29 15:26:23.818849 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-08-29 15:26:23.818860 | orchestrator | skipping: no hosts matched
2025-08-29 15:26:23.818872 | orchestrator |
2025-08-29 15:26:23.818883 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-08-29 15:26:23.818895 | orchestrator |
2025-08-29 15:26:23.818906 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-08-29 15:26:23.818917 | orchestrator | Friday 29 August 2025 15:26:22 +0000 (0:00:18.996) 0:00:24.856 *********
2025-08-29 15:26:23.818928 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:26:23.818950 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:26:23.818962 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:26:23.818973 | orchestrator |
2025-08-29 15:26:23.818984 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-08-29 15:26:23.818995 | orchestrator | Friday 29 August 2025 15:26:22 +0000 (0:00:00.328) 0:00:25.185 *********
2025-08-29 15:26:23.819007 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:26:23.819018 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:26:23.819029 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:26:23.819040 | orchestrator |
2025-08-29 15:26:23.819051 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:26:23.819064 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 15:26:23.819076 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:26:23.819087 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:26:23.819099 | orchestrator |
2025-08-29 15:26:23.819110 | orchestrator |
2025-08-29 15:26:23.819196 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:26:23.819212 | orchestrator | Friday 29 August 2025 15:26:23 +0000 (0:00:00.488) 0:00:25.673 *********
2025-08-29 15:26:23.819224 | orchestrator | ===============================================================================
2025-08-29 15:26:23.819235 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 19.00s
2025-08-29 15:26:23.819265 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.60s
2025-08-29 15:26:23.819277 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-08-29 15:26:23.819288 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s
2025-08-29 15:26:23.819300 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.49s
2025-08-29 15:26:23.819311 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s
2025-08-29 15:26:23.819322 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-08-29 15:26:23.819333 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s
2025-08-29 15:26:24.265084 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-08-29 15:26:24.276691 | orchestrator | + set -e
2025-08-29 15:26:24.276747 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 15:26:24.276761 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 15:26:24.277330 | orchestrator | ++ INTERACTIVE=false
2025-08-29 15:26:24.277361 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 15:26:24.277372 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 15:26:24.277384 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-08-29 15:26:24.278796 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-08-29 15:26:24.286185 | orchestrator |
2025-08-29 15:26:24.286237 | orchestrator | # OpenStack endpoints
2025-08-29 15:26:24.286265 | orchestrator |
2025-08-29 15:26:24.286277 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 15:26:24.286288 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 15:26:24.286311 | orchestrator | + export OS_CLOUD=admin
2025-08-29 15:26:24.286322 | orchestrator | + OS_CLOUD=admin
2025-08-29 15:26:24.286334 | orchestrator | + echo
2025-08-29 15:26:24.286345 | orchestrator | + echo '# OpenStack endpoints'
2025-08-29 15:26:24.286356 | orchestrator | + echo
2025-08-29 15:26:24.286368 | orchestrator | + openstack endpoint list
2025-08-29 15:26:27.764704 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 15:26:27.764815 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-08-29 15:26:27.764878 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 15:26:27.764900 | orchestrator | | 01a70a43de1c4e5da65abd557c445112 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-08-29 15:26:27.764918 | orchestrator | | 051233e5ecd2498abfd3f8a89f72ef04 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-08-29 15:26:27.764957 | orchestrator | | 119aa9d62fe44a56890a0772768aa22d | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-08-29 15:26:27.764976 | orchestrator | | 1bd37421748c420583522f1c21890c3b | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-08-29 15:26:27.764994 | orchestrator | | 2d8b7fc14df349c8b95cd4b274772c64 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-08-29 15:26:27.765014 | orchestrator | | 4ca11aaabb1946ec928b72de6aa0790c | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-08-29 15:26:27.765032 | orchestrator | | 5f0953a41b5345fea2d62270fc19b700 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-08-29 15:26:27.765051 | orchestrator | | 7cfc025c60d54e1eb820b4dc6d595582 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-08-29 15:26:27.765062 | orchestrator | | 8b702b59d06941ee8eb3086b9b3faab1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-08-29 15:26:27.765073 | orchestrator | | 8fa835c527654e81a6bd98f793f45e4f | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-08-29 15:26:27.765084 | orchestrator | | 92c7d23828ac4a46a831305bd7f5f5ec | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-08-29 15:26:27.765095 | orchestrator | | 971794576b16498fbff5e9e403cc22e2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-08-29 15:26:27.765106 | orchestrator | | afaa7371f88e49babde5b3dec02c1c57 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-08-29 15:26:27.765117 | orchestrator | | b050c9c993c94b678ee291cad493f457 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-08-29 15:26:27.765160 | orchestrator | | bb0f8913ed184549801d1cb4f017b9b4 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-08-29 15:26:27.765174 | orchestrator | | cc38013f62064252a1cb94f38865c877 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-08-29 15:26:27.765186 | orchestrator | | cf32fec8f6f74232af128f52d2e7efd9 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-08-29 15:26:27.765197 | orchestrator | | d266eeed8d754e1788669b23575bf542 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-08-29 15:26:27.765209 | orchestrator | | e59316aa7d4a4086b1c2b5eaa49dbce5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-08-29 15:26:27.765220 | orchestrator | | e679eee5a019493c947fcc6dfd1f859f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-08-29 15:26:27.765259 | orchestrator | | ee0f71b1f885400bb045ac94f9168c40 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-08-29 15:26:27.765271 | orchestrator | | f762628b09d145ceaed65d06e8c02a56 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-08-29 15:26:27.765282 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 15:26:28.071470 | orchestrator |
2025-08-29 15:26:28.071564 | orchestrator | # Cinder
2025-08-29 15:26:28.071579 | orchestrator |
2025-08-29 15:26:28.071591 | orchestrator | + echo
2025-08-29 15:26:28.071603 | orchestrator | + echo '# Cinder'
2025-08-29 15:26:28.071615 | orchestrator | + echo
2025-08-29 15:26:28.071627 | orchestrator | + openstack volume service list
2025-08-29 15:26:30.992388 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:30.992493 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-08-29 15:26:30.992509 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:30.992521 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T15:26:29.000000 |
2025-08-29 15:26:30.992532 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T15:26:21.000000 |
2025-08-29 15:26:30.992544 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T15:26:21.000000 |
2025-08-29 15:26:30.992555 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-08-29T15:26:22.000000 |
2025-08-29 15:26:30.992566 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-08-29T15:26:25.000000 |
2025-08-29 15:26:30.992577 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-08-29T15:26:25.000000 |
2025-08-29 15:26:30.992588 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-08-29T15:26:21.000000 |
2025-08-29 15:26:30.992600 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-08-29T15:26:21.000000 |
2025-08-29 15:26:30.992611 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-08-29T15:26:23.000000 |
2025-08-29 15:26:30.992639 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:31.337853 | orchestrator |
2025-08-29 15:26:31.337957 | orchestrator | # Neutron
2025-08-29 15:26:31.337981 | orchestrator |
2025-08-29 15:26:31.338000 | orchestrator | + echo
2025-08-29 15:26:31.338082 | orchestrator | + echo '# Neutron'
2025-08-29 15:26:31.338110 | orchestrator | + echo
2025-08-29 15:26:31.338128 | orchestrator | + openstack network agent list
2025-08-29 15:26:34.326472 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-08-29 15:26:34.326581 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-08-29 15:26:34.326596 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-08-29 15:26:34.326608 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326619 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326630 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326670 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326682 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326693 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-08-29 15:26:34.326704 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-08-29 15:26:34.326715 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-08-29 15:26:34.326726 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-08-29 15:26:34.326737 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-08-29 15:26:34.621775 | orchestrator | + openstack network service provider list
2025-08-29 15:26:37.393211 | orchestrator | +---------------+------+---------+
2025-08-29 15:26:37.393307 | orchestrator | | Service Type | Name | Default |
2025-08-29 15:26:37.393321 | orchestrator | +---------------+------+---------+
2025-08-29 15:26:37.393332 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-08-29 15:26:37.393344 | orchestrator | +---------------+------+---------+
2025-08-29 15:26:37.680789 | orchestrator |
2025-08-29 15:26:37.680875 | orchestrator | # Nova
2025-08-29 15:26:37.680890 | orchestrator |
2025-08-29 15:26:37.680901 | orchestrator | + echo
2025-08-29 15:26:37.680913 | orchestrator | + echo '# Nova'
2025-08-29 15:26:37.680924 | orchestrator | + echo
2025-08-29 15:26:37.680936 | orchestrator | + openstack compute service list
2025-08-29 15:26:41.036657 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:41.036776 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-08-29 15:26:41.036796 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:41.036814 | orchestrator | | c9d83bd3-9121-4170-bacb-8ec4c5f78b3e | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T15:26:37.000000 |
2025-08-29 15:26:41.036832 | orchestrator | | 991773a8-8260-4ded-b04e-1a57a97f9be7 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T15:26:37.000000 |
2025-08-29 15:26:41.036849 | orchestrator | | 05893c09-9697-417f-94ed-14616b3737e3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T15:26:30.000000 |
2025-08-29 15:26:41.036866 | orchestrator | | 431b7a51-67d5-48e6-9d43-c153d156cb48 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-08-29T15:26:38.000000 |
2025-08-29 15:26:41.036883 | orchestrator | | 85c4d7d4-7dda-4f0e-8963-2082fcba50d8 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-08-29T15:26:38.000000 |
2025-08-29 15:26:41.036900 | orchestrator | | 53c4ac0d-1df8-4637-b248-2d3d38423dbc | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-08-29T15:26:38.000000 |
2025-08-29 15:26:41.036918 | orchestrator | | 51b1a0ff-bd41-4b03-ab08-c4521adb79a8 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-08-29T15:26:35.000000 |
2025-08-29 15:26:41.036934 | orchestrator | | 1f9b9d79-5335-4b40-b365-b09ddd7a1745 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-08-29T15:26:36.000000 |
2025-08-29 15:26:41.036952 | orchestrator | | 6d9d3a0b-f011-4265-9c26-c6a9f5461bf8 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-08-29T15:26:36.000000 |
2025-08-29 15:26:41.036992 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-08-29 15:26:41.337099 | orchestrator | + openstack hypervisor list
2025-08-29 15:26:45.758227 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-08-29 15:26:45.758335 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-08-29 15:26:45.758349 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-08-29 15:26:45.758360 | orchestrator | | 2660c8b6-d1eb-4f78-8952-6f3968d8cf0f | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-08-29 15:26:45.758370 | orchestrator | | 9e90988e-5873-4a08-80c3-004ef6409b17 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-08-29 15:26:45.758380 | orchestrator | | 94d61982-47e0-43fc-802b-a63676dd0f13 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-08-29 15:26:45.758390 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-08-29 15:26:46.089335 | orchestrator |
2025-08-29 15:26:46.089457 | orchestrator | # Run OpenStack test play
2025-08-29 15:26:46.089484 | orchestrator |
2025-08-29 15:26:46.089505 | orchestrator | + echo
2025-08-29 15:26:46.089524 | orchestrator | + echo '# Run OpenStack test play'
2025-08-29 15:26:46.089547 | orchestrator | + echo
2025-08-29 15:26:46.089567 | orchestrator | + osism apply --environment openstack test
2025-08-29 15:26:47.988543 | orchestrator | 2025-08-29 15:26:47 | INFO  | Trying to run play test in environment openstack
2025-08-29 15:26:58.260973 | orchestrator | 2025-08-29 15:26:58 | INFO  | Task 826af366-20a9-4586-a8e1-74a6dc1d8be5 (test) was prepared for execution.
2025-08-29 15:26:58.262681 | orchestrator | 2025-08-29 15:26:58 | INFO  | It takes a moment until task 826af366-20a9-4586-a8e1-74a6dc1d8be5 (test) has been started and output is visible here.
2025-08-29 15:32:54.345852 | orchestrator |
2025-08-29 15:32:54.345957 | orchestrator | PLAY [Create test project] *****************************************************
2025-08-29 15:32:54.345972 | orchestrator |
2025-08-29 15:32:54.345983 | orchestrator | TASK [Create test domain] ******************************************************
2025-08-29 15:32:54.345994 | orchestrator | Friday 29 August 2025 15:27:02 +0000 (0:00:00.089) 0:00:00.089 *********
2025-08-29 15:32:54.346005 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346066 | orchestrator |
2025-08-29 15:32:54.346080 | orchestrator | TASK [Create test-admin user] **************************************************
2025-08-29 15:32:54.346090 | orchestrator | Friday 29 August 2025 15:27:06 +0000 (0:00:03.743) 0:00:03.833 *********
2025-08-29 15:32:54.346101 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346111 | orchestrator |
2025-08-29 15:32:54.346121 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-08-29 15:32:54.346132 | orchestrator | Friday 29 August 2025 15:27:10 +0000 (0:00:04.164) 0:00:07.997 *********
2025-08-29 15:32:54.346142 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346152 | orchestrator |
2025-08-29 15:32:54.346162 | orchestrator | TASK [Create test project] *****************************************************
2025-08-29 15:32:54.346172 | orchestrator | Friday 29 August 2025 15:27:16 +0000 (0:00:06.455) 0:00:14.453 *********
2025-08-29 15:32:54.346182 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346192 | orchestrator |
2025-08-29 15:32:54.346202 | orchestrator | TASK [Create test user] ********************************************************
2025-08-29 15:32:54.346212 | orchestrator | Friday 29 August 2025 15:27:20 +0000 (0:00:03.962) 0:00:18.415 *********
2025-08-29 15:32:54.346222 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346232 | orchestrator |
2025-08-29 15:32:54.346242 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-08-29 15:32:54.346252 | orchestrator | Friday 29 August 2025 15:27:24 +0000 (0:00:04.175) 0:00:22.591 *********
2025-08-29 15:32:54.346262 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-08-29 15:32:54.346273 | orchestrator | changed: [localhost] => (item=member)
2025-08-29 15:32:54.346284 | orchestrator | changed: [localhost] => (item=creator)
2025-08-29 15:32:54.346316 | orchestrator |
2025-08-29 15:32:54.346327 | orchestrator | TASK [Create test server group] ************************************************
2025-08-29 15:32:54.346337 | orchestrator | Friday 29 August 2025 15:27:36 +0000 (0:00:12.009) 0:00:34.600 *********
2025-08-29 15:32:54.346347 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346357 | orchestrator |
2025-08-29 15:32:54.346368 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-08-29 15:32:54.346378 | orchestrator | Friday 29 August 2025 15:27:41 +0000 (0:00:04.583) 0:00:39.184 *********
2025-08-29 15:32:54.346388 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346398 | orchestrator |
2025-08-29 15:32:54.346409 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-08-29 15:32:54.346420 | orchestrator | Friday 29 August 2025 15:27:46 +0000 (0:00:04.677) 0:00:43.861 *********
2025-08-29 15:32:54.346431 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346442 | orchestrator |
2025-08-29 15:32:54.346454 | orchestrator | TASK [Create icmp security group] **********************************************
2025-08-29 15:32:54.346465 | orchestrator | Friday 29 August 2025 15:27:50 +0000 (0:00:04.173) 0:00:48.035 *********
2025-08-29 15:32:54.346476 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346487 | orchestrator |
2025-08-29 15:32:54.346498 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-08-29 15:32:54.346509 | orchestrator | Friday 29 August 2025 15:27:54 +0000 (0:00:03.852) 0:00:51.887 *********
2025-08-29 15:32:54.346519 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346529 | orchestrator |
2025-08-29 15:32:54.346539 | orchestrator | TASK [Create test keypair] *****************************************************
2025-08-29 15:32:54.346549 | orchestrator | Friday 29 August 2025 15:27:58 +0000 (0:00:04.072) 0:00:55.960 *********
2025-08-29 15:32:54.346558 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346568 | orchestrator |
2025-08-29 15:32:54.346578 | orchestrator | TASK [Create test network topology] ********************************************
2025-08-29 15:32:54.346602 | orchestrator | Friday 29 August 2025 15:28:02 +0000 (0:00:03.821) 0:00:59.782 *********
2025-08-29 15:32:54.346612 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.346622 | orchestrator |
2025-08-29 15:32:54.346632 | orchestrator | TASK [Create test instances] ***************************************************
2025-08-29 15:32:54.346642 | orchestrator | Friday 29 August 2025 15:28:17 +0000 (0:00:15.095) 0:01:14.877 *********
2025-08-29 15:32:54.346652 | orchestrator | changed: [localhost] => (item=test)
2025-08-29 15:32:54.346662 | orchestrator | changed: [localhost] => (item=test-1)
2025-08-29 15:32:54.346672 | orchestrator | changed: [localhost] => (item=test-2)
2025-08-29 15:32:54.346682 | orchestrator |
2025-08-29 15:32:54.346692 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-08-29 15:32:54.346702 | orchestrator | changed: [localhost] => (item=test-3)
2025-08-29 15:32:54.346712 | orchestrator |
2025-08-29 15:32:54.346779 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-08-29 15:32:54.346790 | orchestrator | changed: [localhost] => (item=test-4)
2025-08-29 15:32:54.346800 | orchestrator |
2025-08-29 15:32:54.346810 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-08-29 15:32:54.346819 | orchestrator | Friday 29 August 2025 15:31:28 +0000 (0:03:11.140) 0:04:26.017 *********
2025-08-29 15:32:54.346829 | orchestrator | changed: [localhost] => (item=test)
2025-08-29 15:32:54.346839 | orchestrator | changed: [localhost] => (item=test-1)
2025-08-29 15:32:54.346849 | orchestrator | changed: [localhost] => (item=test-2)
2025-08-29 15:32:54.346859 | orchestrator | changed: [localhost] => (item=test-3)
2025-08-29 15:32:54.346869 | orchestrator | changed: [localhost] => (item=test-4)
2025-08-29 15:32:54.346879 | orchestrator |
2025-08-29 15:32:54.346889 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-08-29 15:32:54.346899 | orchestrator | Friday 29 August 2025 15:31:52 +0000 (0:00:24.583) 0:04:50.601 *********
2025-08-29 15:32:54.346909 | orchestrator | changed: [localhost] => (item=test)
2025-08-29 15:32:54.346919 | orchestrator | changed: [localhost] => (item=test-1)
2025-08-29 15:32:54.346936 | orchestrator | changed: [localhost] => (item=test-2)
2025-08-29 15:32:54.346947 | orchestrator | changed: [localhost] => (item=test-3)
2025-08-29 15:32:54.346973 | orchestrator | changed: [localhost] => (item=test-4)
2025-08-29 15:32:54.346984 | orchestrator |
2025-08-29 15:32:54.346997 | orchestrator | TASK [Create test volume] ******************************************************
2025-08-29 15:32:54.347008 | orchestrator | Friday 29 August 2025 15:32:28 +0000 (0:00:35.291) 0:05:25.892 *********
2025-08-29 15:32:54.347018 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.347028 | orchestrator |
2025-08-29 15:32:54.347038 | orchestrator | TASK [Attach test volume] ******************************************************
2025-08-29 15:32:54.347048 | orchestrator | Friday 29 August 2025 15:32:35 +0000 (0:00:06.896) 0:05:32.789 *********
2025-08-29 15:32:54.347058 | orchestrator | changed: [localhost]
2025-08-29 15:32:54.347067 | orchestrator |
2025-08-29 15:32:54.347077 | orchestrator | TASK [Create floating ip address] **********************************************
2025-08-29 15:32:54.347087 | orchestrator | Friday 29 August 2025 15:32:48 +0000 (0:00:13.527) 0:05:46.316 *********
2025-08-29 15:32:54.347098 | orchestrator | ok: [localhost]
2025-08-29 15:32:54.347108 | orchestrator |
2025-08-29 15:32:54.347118 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-08-29 15:32:54.347128 | orchestrator | Friday 29 August 2025 15:32:54 +0000 (0:00:05.510) 0:05:51.827 *********
2025-08-29 15:32:54.347138 | orchestrator | ok: [localhost] => {
2025-08-29 15:32:54.347148 | orchestrator |     "msg": "192.168.112.183"
2025-08-29 15:32:54.347158 | orchestrator | }
2025-08-29 15:32:54.347169 | orchestrator |
2025-08-29 15:32:54.347178 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:32:54.347189 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:32:54.347200 | orchestrator |
2025-08-29 15:32:54.347210 | orchestrator |
2025-08-29 15:32:54.347220 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:32:54.347230 | orchestrator | Friday 29 August 2025 15:32:54 +0000 (0:00:00.038) 0:05:51.866 *********
2025-08-29 15:32:54.347240 | orchestrator | ===============================================================================
2025-08-29 15:32:54.347250 | orchestrator | Create test instances ------------------------------------------------- 191.14s
2025-08-29 15:32:54.347260 | orchestrator | Add tag to instances --------------------------------------------------- 35.29s
2025-08-29 15:32:54.347270 | orchestrator | Add metadata to instances ---------------------------------------------- 24.58s
2025-08-29 15:32:54.347280 | orchestrator | Create test network topology ------------------------------------------- 15.10s
2025-08-29 15:32:54.347289 | orchestrator | Attach test volume ----------------------------------------------------- 13.53s
2025-08-29 15:32:54.347299 | orchestrator | Add member roles to user test ------------------------------------------ 12.01s
2025-08-29 15:32:54.347309 | orchestrator | Create test volume ------------------------------------------------------ 6.90s
2025-08-29 15:32:54.347319 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.46s
2025-08-29 15:32:54.347329 | orchestrator | Create floating ip address ---------------------------------------------- 5.51s
2025-08-29 15:32:54.347339 | orchestrator | Create ssh security group ----------------------------------------------- 4.68s
2025-08-29 15:32:54.347349 | orchestrator | Create test server group ------------------------------------------------ 4.58s
2025-08-29 15:32:54.347359 | orchestrator | Create test user -------------------------------------------------------- 4.18s
2025-08-29 15:32:54.347369 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.17s
2025-08-29 15:32:54.347379 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s
2025-08-29 15:32:54.347389 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.07s
2025-08-29 15:32:54.347399 | orchestrator | Create test project ----------------------------------------------------- 3.96s
2025-08-29 15:32:54.347415 | orchestrator | Create icmp security group ---------------------------------------------- 3.85s
2025-08-29 15:32:54.347425 | orchestrator | Create test keypair ----------------------------------------------------- 3.82s
2025-08-29 15:32:54.347435 | orchestrator | Create test domain ------------------------------------------------------ 3.74s
2025-08-29 15:32:54.347445 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-08-29 15:32:54.648092 | orchestrator | + server_list
2025-08-29 15:32:54.648181 | orchestrator | + openstack --os-cloud test server list
2025-08-29 15:32:58.528859 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-08-29 15:32:58.528960 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-08-29 15:32:58.528976 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-08-29 15:32:58.528988 | orchestrator | | 5f1d4d9d-b258-4e6c-8a35-1ecefae5ecfc | test-4 | ACTIVE | auto_allocated_network=10.42.0.44, 192.168.112.104 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-08-29 15:32:58.528999 | orchestrator | | d9e14c9b-e32c-426b-ada8-6de91923f972 | test-3 | ACTIVE | auto_allocated_network=10.42.0.56, 192.168.112.167 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-08-29 15:32:58.529010 | orchestrator | | 7148c885-4870-4721-b8cd-dc2712fa5eca | test-2 | ACTIVE | auto_allocated_network=10.42.0.45, 192.168.112.113 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-08-29 15:32:58.529022 | orchestrator | | b9c5bde1-f895-43b8-b215-000f766e9e63 | test-1 | ACTIVE | auto_allocated_network=10.42.0.19, 192.168.112.102 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-08-29 15:32:58.529055 | orchestrator | | 1e8cbd42-e090-41a6-a641-8dda90b6a7e1 | test | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.183 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-08-29 15:32:58.529067 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-08-29 15:32:58.794917 | orchestrator | + openstack --os-cloud test server show test
2025-08-29 15:33:02.141191 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:02.141287 | orchestrator | | Field | Value |
2025-08-29 15:33:02.141303 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:02.141315 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 15:33:02.141327 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 15:33:02.141339 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 15:33:02.141370 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-08-29 15:33:02.141389 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 15:33:02.141402 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 15:33:02.141413 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 15:33:02.141424 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 15:33:02.141452 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 15:33:02.141465 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 15:33:02.141476 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 15:33:02.141487 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 15:33:02.141499 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 15:33:02.141518 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 15:33:02.141529 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 15:33:02.141545 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:28:46.000000 |
2025-08-29 15:33:02.141557 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 15:33:02.141569 | orchestrator | | accessIPv4 | |
2025-08-29 15:33:02.141580 | orchestrator | | accessIPv6 | |
2025-08-29 15:33:02.141592 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.183 |
2025-08-29 15:33:02.141610 | orchestrator | | config_drive | |
2025-08-29 15:33:02.141622 | orchestrator | | created | 2025-08-29T15:28:25Z |
2025-08-29 15:33:02.141633 | orchestrator | | description | None |
2025-08-29 15:33:02.141644 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 15:33:02.141662 | orchestrator | | hostId | a79b14b596b403368d67f7f141e6256071557ad54066033be23c3604 |
2025-08-29 15:33:02.141673 | orchestrator | | host_status | None |
2025-08-29 15:33:02.141685 | orchestrator | | id | 1e8cbd42-e090-41a6-a641-8dda90b6a7e1 |
2025-08-29 15:33:02.141701 | orchestrator | | image | Cirros 0.6.2 (4103d791-424a-4644-af98-1610d1f7142b) |
2025-08-29 15:33:02.141715 | orchestrator | | key_name | test |
2025-08-29 15:33:02.141756 | orchestrator | | locked | False |
2025-08-29 15:33:02.141769 | orchestrator | | locked_reason | None |
2025-08-29 15:33:02.141782 | orchestrator | | name | test |
2025-08-29 15:33:02.141802 | orchestrator | | pinned_availability_zone | None |
2025-08-29 15:33:02.141817 | orchestrator | | progress | 0 |
2025-08-29 15:33:02.141830 | orchestrator | | project_id | 6a7e56d363bd45448a3c686437135859 |
2025-08-29 15:33:02.141850 | orchestrator | | properties | hostname='test' |
2025-08-29 15:33:02.141864 | orchestrator | | security_groups | name='ssh' |
2025-08-29 15:33:02.141878 | orchestrator | | | name='icmp' |
2025-08-29 15:33:02.141891 | orchestrator | | server_groups | None |
2025-08-29 15:33:02.141907 | orchestrator | | status | ACTIVE |
2025-08-29 15:33:02.141919 | orchestrator | | tags | test |
2025-08-29 15:33:02.141931 | orchestrator | | trusted_image_certificates | None |
2025-08-29 15:33:02.141942 | orchestrator | | updated | 2025-08-29T15:31:33Z |
2025-08-29 15:33:02.141959 | orchestrator | | user_id | efd55a9847274ffb99b47029718718dd |
2025-08-29 15:33:02.141971 | orchestrator | | volumes_attached | delete_on_termination='False', id='041144a2-f0f8-47af-91b3-8c4bef7c038f' |
2025-08-29 15:33:02.145157 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:02.437919 | orchestrator | + openstack --os-cloud test server show test-1
2025-08-29 15:33:05.719798 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:05.719898 | orchestrator | | Field | Value |
2025-08-29 15:33:05.719913 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:05.719925 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 15:33:05.719937 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 15:33:05.719949 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 15:33:05.719960 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-08-29 15:33:05.719972 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 15:33:05.719984 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 15:33:05.720061 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 15:33:05.720107 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 15:33:05.720140 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 15:33:05.720153 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 15:33:05.720164 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 15:33:05.720176 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 15:33:05.720187 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 15:33:05.720205 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 15:33:05.720216 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 15:33:05.720228 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:29:28.000000 |
2025-08-29 15:33:05.720255 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 15:33:05.720280 | orchestrator | | accessIPv4 | |
2025-08-29 15:33:05.720302 | orchestrator | | accessIPv6 | |
2025-08-29 15:33:05.720316 | orchestrator | | addresses | auto_allocated_network=10.42.0.19, 192.168.112.102 |
2025-08-29 15:33:05.720336 | orchestrator | | config_drive | |
2025-08-29 15:33:05.720350 | orchestrator | | created | 2025-08-29T15:29:08Z |
2025-08-29 15:33:05.720364 | orchestrator | | description | None |
2025-08-29 15:33:05.720377 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 15:33:05.720390 | orchestrator | | hostId | 68bc2bbe6a576864266da2a871820b45cf7eaed0b42b939f73634ca5 |
2025-08-29 15:33:05.720408 | orchestrator | | host_status | None |
2025-08-29 15:33:05.720422 | orchestrator | | id | b9c5bde1-f895-43b8-b215-000f766e9e63 |
2025-08-29 15:33:05.720435 | orchestrator | | image | Cirros 0.6.2 (4103d791-424a-4644-af98-1610d1f7142b) |
2025-08-29 15:33:05.720451 | orchestrator | | key_name | test |
2025-08-29 15:33:05.720480 | orchestrator | | locked | False |
2025-08-29 15:33:05.720500 | orchestrator | | locked_reason | None |
2025-08-29 15:33:05.720521 | orchestrator | | name | test-1 |
2025-08-29 15:33:05.720543 | orchestrator | | pinned_availability_zone | None |
2025-08-29 15:33:05.720557 | orchestrator | | progress | 0 |
2025-08-29 15:33:05.720575 | orchestrator | | project_id | 6a7e56d363bd45448a3c686437135859 |
2025-08-29 15:33:05.720593 | orchestrator | | properties | hostname='test-1' |
2025-08-29 15:33:05.720616 | orchestrator | | security_groups | name='ssh' |
2025-08-29 15:33:05.720648 | orchestrator | | | name='icmp' |
2025-08-29 15:33:05.720666 | orchestrator | | server_groups | None |
2025-08-29 15:33:05.720684 | orchestrator | | status | ACTIVE |
2025-08-29 15:33:05.720721 | orchestrator | | tags | test |
2025-08-29 15:33:05.720771 | orchestrator | | trusted_image_certificates | None |
2025-08-29 15:33:05.720790 | orchestrator | | updated | 2025-08-29T15:31:38Z |
2025-08-29 15:33:05.720817 | orchestrator | | user_id | efd55a9847274ffb99b47029718718dd |
2025-08-29 15:33:05.720837 | orchestrator | | volumes_attached | |
2025-08-29 15:33:05.725148 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:06.055921 | orchestrator | + openstack --os-cloud test server show test-2
2025-08-29 15:33:09.283268 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:09.283360 | orchestrator | | Field | Value |
2025-08-29 15:33:09.283393 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:09.283406 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 15:33:09.283441 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 15:33:09.283454 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 15:33:09.283466 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-08-29 15:33:09.283478 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 15:33:09.283490 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 15:33:09.283502 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 15:33:09.283514 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 15:33:09.283544 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 15:33:09.283557 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 15:33:09.283574 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 15:33:09.283587 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 15:33:09.283616 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 15:33:09.283636 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 15:33:09.283655 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 15:33:09.283674 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:30:08.000000 |
2025-08-29 15:33:09.283692 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 15:33:09.283710 | orchestrator | | accessIPv4 | |
2025-08-29 15:33:09.283727 | orchestrator | | accessIPv6 | |
2025-08-29 15:33:09.283777 | orchestrator | | addresses | auto_allocated_network=10.42.0.45, 192.168.112.113 |
2025-08-29 15:33:09.283809 | orchestrator | | config_drive | |
2025-08-29 15:33:09.283830 | orchestrator | | created | 2025-08-29T15:29:47Z |
2025-08-29 15:33:09.283849 | orchestrator | | description | None |
2025-08-29 15:33:09.283870 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 15:33:09.283891 | orchestrator | | hostId | a8acf7737c38290e39eb4039c80193b02c839388fb028cfb50b29b3c |
2025-08-29 15:33:09.283903 | orchestrator | | host_status | None |
2025-08-29 15:33:09.283914 | orchestrator | | id | 7148c885-4870-4721-b8cd-dc2712fa5eca |
2025-08-29 15:33:09.283926 | orchestrator | | image | Cirros 0.6.2 (4103d791-424a-4644-af98-1610d1f7142b) |
2025-08-29 15:33:09.283937 | orchestrator | | key_name | test |
2025-08-29 15:33:09.283949 | orchestrator | | locked | False |
2025-08-29 15:33:09.283960 | orchestrator | | locked_reason | None |
2025-08-29 15:33:09.283971 | orchestrator | | name | test-2 |
2025-08-29 15:33:09.283990 | orchestrator | | pinned_availability_zone | None |
2025-08-29 15:33:09.284002 | orchestrator | | progress | 0 |
2025-08-29 15:33:09.284025 | orchestrator | | project_id | 6a7e56d363bd45448a3c686437135859 |
2025-08-29 15:33:09.284037 | orchestrator | | properties | hostname='test-2' |
2025-08-29 15:33:09.284048 | orchestrator | | security_groups | name='ssh' |
2025-08-29 15:33:09.284061 | orchestrator | | | name='icmp' |
2025-08-29 15:33:09.284072 | orchestrator | | server_groups | None |
2025-08-29 15:33:09.284084 | orchestrator | | status | ACTIVE |
2025-08-29 15:33:09.284096 | orchestrator | | tags | test |
2025-08-29 15:33:09.284107 | orchestrator | | trusted_image_certificates | None |
2025-08-29 15:33:09.284119 | orchestrator | | updated | 2025-08-29T15:31:43Z |
2025-08-29 15:33:09.284135 | orchestrator | | user_id | efd55a9847274ffb99b47029718718dd |
2025-08-29 15:33:09.284154 | orchestrator | | volumes_attached | |
2025-08-29 15:33:09.289513 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:09.554822 | orchestrator | + openstack --os-cloud test server show test-3
2025-08-29 15:33:12.717283 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:12.717401 | orchestrator | | Field | Value |
2025-08-29 15:33:12.717431 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:12.717452 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 15:33:12.717474 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 15:33:12.717494 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 15:33:12.717510 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-08-29 15:33:12.717522 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 15:33:12.717534 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 15:33:12.717568 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 15:33:12.717580 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 15:33:12.717623 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 15:33:12.717636 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 15:33:12.717648 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 15:33:12.717660 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 15:33:12.717671 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 15:33:12.717683 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 15:33:12.717694 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 15:33:12.717705 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:30:42.000000 |
2025-08-29 15:33:12.717717 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 15:33:12.717779 | orchestrator | | accessIPv4 | |
2025-08-29 15:33:12.717794 | orchestrator | | accessIPv6 | |
2025-08-29 15:33:12.717805 | orchestrator | | addresses | auto_allocated_network=10.42.0.56, 192.168.112.167 |
2025-08-29 15:33:12.717831 | orchestrator | | config_drive | |
2025-08-29 15:33:12.717845 | orchestrator | | created | 2025-08-29T15:30:23Z |
2025-08-29 15:33:12.717858 | orchestrator | | description | None |
2025-08-29 15:33:12.717872 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 15:33:12.717885 | orchestrator | | hostId | a79b14b596b403368d67f7f141e6256071557ad54066033be23c3604 |
2025-08-29 15:33:12.717898 | orchestrator | | host_status | None |
2025-08-29 15:33:12.717911 | orchestrator | | id | d9e14c9b-e32c-426b-ada8-6de91923f972 |
2025-08-29 15:33:12.717931 | orchestrator | | image | Cirros 0.6.2 (4103d791-424a-4644-af98-1610d1f7142b) |
2025-08-29 15:33:12.717945 | orchestrator | | key_name | test |
2025-08-29 15:33:12.717958 | orchestrator | | locked | False |
2025-08-29 15:33:12.717971 | orchestrator | | locked_reason | None |
2025-08-29 15:33:12.717989 | orchestrator | | name | test-3 |
2025-08-29 15:33:12.718009 | orchestrator | | pinned_availability_zone | None |
2025-08-29 15:33:12.718070 | orchestrator | | progress | 0 |
2025-08-29 15:33:12.718082 | orchestrator | | project_id | 6a7e56d363bd45448a3c686437135859 |
2025-08-29 15:33:12.718093 | orchestrator | | properties | hostname='test-3' |
2025-08-29 15:33:12.718104 | orchestrator | | security_groups | name='ssh' |
2025-08-29 15:33:12.718116 | orchestrator | | | name='icmp' |
2025-08-29 15:33:12.718135 | orchestrator | | server_groups | None |
2025-08-29 15:33:12.718147 | orchestrator | | status | ACTIVE |
2025-08-29 15:33:12.718158 | orchestrator | | tags | test |
2025-08-29 15:33:12.718169 | orchestrator | | trusted_image_certificates | None |
2025-08-29 15:33:12.718181 | orchestrator | | updated | 2025-08-29T15:31:47Z |
2025-08-29 15:33:12.718199 | orchestrator | | user_id | efd55a9847274ffb99b47029718718dd |
2025-08-29 15:33:12.718211 | orchestrator | | volumes_attached | |
2025-08-29 15:33:12.723257 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:12.992501 | orchestrator | + openstack --os-cloud test server show test-4
2025-08-29 15:33:16.166448 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
2025-08-29 15:33:16.166545 | orchestrator | | Field | Value |
2025-08-29 15:33:16.166562 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------+
15:33:16.166600 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 15:33:16.166696 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 15:33:16.166713 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 15:33:16.166725 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-08-29 15:33:16.166737 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 15:33:16.166781 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 15:33:16.166793 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 15:33:16.166804 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 15:33:16.166835 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 15:33:16.166848 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 15:33:16.166860 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 15:33:16.166882 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 15:33:16.166894 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 15:33:16.166905 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 15:33:16.166917 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 15:33:16.166928 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T15:31:16.000000 | 2025-08-29 15:33:16.166953 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 15:33:16.166971 | orchestrator | | accessIPv4 | | 2025-08-29 15:33:16.166994 | orchestrator | | accessIPv6 | | 2025-08-29 15:33:16.167007 | orchestrator | | addresses | auto_allocated_network=10.42.0.44, 192.168.112.104 | 2025-08-29 15:33:16.167030 | orchestrator | | config_drive | | 2025-08-29 15:33:16.167051 | orchestrator | | created | 2025-08-29T15:31:01Z | 2025-08-29 15:33:16.167064 | orchestrator | | description | None | 2025-08-29 15:33:16.167077 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 15:33:16.167090 | orchestrator | | hostId | 68bc2bbe6a576864266da2a871820b45cf7eaed0b42b939f73634ca5 | 2025-08-29 15:33:16.167103 | orchestrator | | host_status | None | 2025-08-29 15:33:16.167117 | orchestrator | | id | 5f1d4d9d-b258-4e6c-8a35-1ecefae5ecfc | 2025-08-29 15:33:16.167130 | orchestrator | | image | Cirros 0.6.2 (4103d791-424a-4644-af98-1610d1f7142b) | 2025-08-29 15:33:16.167142 | orchestrator | | key_name | test | 2025-08-29 15:33:16.167161 | orchestrator | | locked | False | 2025-08-29 15:33:16.167175 | orchestrator | | locked_reason | None | 2025-08-29 15:33:16.167188 | orchestrator | | name | test-4 | 2025-08-29 15:33:16.167215 | orchestrator | | pinned_availability_zone | None | 2025-08-29 15:33:16.167229 | orchestrator | | progress | 0 | 2025-08-29 15:33:16.167242 | orchestrator | | project_id | 6a7e56d363bd45448a3c686437135859 | 2025-08-29 15:33:16.167254 | orchestrator | | properties | hostname='test-4' | 2025-08-29 15:33:16.167267 | orchestrator | | security_groups | name='ssh' | 2025-08-29 15:33:16.167280 | orchestrator | | | name='icmp' | 2025-08-29 15:33:16.167293 | orchestrator | | server_groups | None | 2025-08-29 15:33:16.167307 | orchestrator | | status | ACTIVE | 2025-08-29 15:33:16.167324 | orchestrator | | tags | test | 2025-08-29 15:33:16.167337 | orchestrator | | trusted_image_certificates | None | 2025-08-29 15:33:16.167350 | orchestrator | | updated | 2025-08-29T15:31:52Z | 2025-08-29 15:33:16.167374 | orchestrator | | user_id | efd55a9847274ffb99b47029718718dd | 2025-08-29 15:33:16.167386 | orchestrator | | volumes_attached | | 2025-08-29 15:33:16.171685 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 15:33:16.436517 | orchestrator | + server_ping 2025-08-29 15:33:16.438181 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-08-29 15:33:16.438247 | orchestrator | ++ tr -d '\r' 2025-08-29 15:33:19.319617 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:33:19.319709 | orchestrator | + ping -c3 192.168.112.183 2025-08-29 15:33:19.333330 | orchestrator | PING 192.168.112.183 (192.168.112.183) 56(84) bytes of data. 2025-08-29 15:33:19.333411 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=1 ttl=63 time=7.36 ms 2025-08-29 15:33:20.329701 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=2 ttl=63 time=2.05 ms 2025-08-29 15:33:21.331368 | orchestrator | 64 bytes from 192.168.112.183: icmp_seq=3 ttl=63 time=2.15 ms 2025-08-29 15:33:21.331457 | orchestrator | 2025-08-29 15:33:21.331471 | orchestrator | --- 192.168.112.183 ping statistics --- 2025-08-29 15:33:21.331482 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 15:33:21.331493 | orchestrator | rtt min/avg/max/mdev = 2.053/3.854/7.363/2.481 ms 2025-08-29 15:33:21.332075 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:33:21.332106 | orchestrator | + ping -c3 192.168.112.102 2025-08-29 15:33:21.346167 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 
2025-08-29 15:33:21.346226 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=7.97 ms 2025-08-29 15:33:22.342790 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.79 ms 2025-08-29 15:33:23.343826 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.94 ms 2025-08-29 15:33:23.343919 | orchestrator | 2025-08-29 15:33:23.343935 | orchestrator | --- 192.168.112.102 ping statistics --- 2025-08-29 15:33:23.343947 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-08-29 15:33:23.343959 | orchestrator | rtt min/avg/max/mdev = 1.935/4.233/7.972/2.666 ms 2025-08-29 15:33:23.344199 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:33:23.344223 | orchestrator | + ping -c3 192.168.112.104 2025-08-29 15:33:23.357173 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 2025-08-29 15:33:23.357234 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=7.60 ms 2025-08-29 15:33:24.353468 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.69 ms 2025-08-29 15:33:25.354617 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.95 ms 2025-08-29 15:33:25.354716 | orchestrator | 2025-08-29 15:33:25.354726 | orchestrator | --- 192.168.112.104 ping statistics --- 2025-08-29 15:33:25.354734 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 15:33:25.354741 | orchestrator | rtt min/avg/max/mdev = 1.947/4.078/7.600/2.508 ms 2025-08-29 15:33:25.355108 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:33:25.355212 | orchestrator | + ping -c3 192.168.112.113 2025-08-29 15:33:25.367635 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data. 
2025-08-29 15:33:25.367703 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=5.26 ms 2025-08-29 15:33:26.366641 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.16 ms 2025-08-29 15:33:27.367868 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=2.26 ms 2025-08-29 15:33:27.368820 | orchestrator | 2025-08-29 15:33:27.368840 | orchestrator | --- 192.168.112.113 ping statistics --- 2025-08-29 15:33:27.368846 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 15:33:27.368864 | orchestrator | rtt min/avg/max/mdev = 2.159/3.226/5.260/1.438 ms 2025-08-29 15:33:27.368878 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 15:33:27.368882 | orchestrator | + ping -c3 192.168.112.167 2025-08-29 15:33:27.381413 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 2025-08-29 15:33:27.381454 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=7.92 ms 2025-08-29 15:33:28.377667 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.48 ms 2025-08-29 15:33:29.379492 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.03 ms 2025-08-29 15:33:29.379608 | orchestrator | 2025-08-29 15:33:29.379633 | orchestrator | --- 192.168.112.167 ping statistics --- 2025-08-29 15:33:29.379653 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-08-29 15:33:29.379673 | orchestrator | rtt min/avg/max/mdev = 2.025/4.142/7.920/2.677 ms 2025-08-29 15:33:29.380179 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 15:33:29.574400 | orchestrator | ok: Runtime: 0:10:44.537504 2025-08-29 15:33:29.625777 | 2025-08-29 15:33:29.625948 | TASK [Run tempest] 2025-08-29 15:33:30.161452 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:30.178861 | 2025-08-29 
15:33:30.179024 | TASK [Check prometheus alert status] 2025-08-29 15:33:30.715573 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:30.719467 | 2025-08-29 15:33:30.719659 | PLAY RECAP 2025-08-29 15:33:30.719814 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-08-29 15:33:30.719877 | 2025-08-29 15:33:30.980372 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-08-29 15:33:30.981506 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:33:31.737217 | 2025-08-29 15:33:31.737396 | PLAY [Post output play] 2025-08-29 15:33:31.754036 | 2025-08-29 15:33:31.754177 | LOOP [stage-output : Register sources] 2025-08-29 15:33:31.831214 | 2025-08-29 15:33:31.831588 | TASK [stage-output : Check sudo] 2025-08-29 15:33:32.714320 | orchestrator | sudo: a password is required 2025-08-29 15:33:32.870789 | orchestrator | ok: Runtime: 0:00:00.016226 2025-08-29 15:33:32.886906 | 2025-08-29 15:33:32.887092 | LOOP [stage-output : Set source and destination for files and folders] 2025-08-29 15:33:32.928810 | 2025-08-29 15:33:32.929140 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-08-29 15:33:32.999112 | orchestrator | ok 2025-08-29 15:33:33.008513 | 2025-08-29 15:33:33.008661 | LOOP [stage-output : Ensure target folders exist] 2025-08-29 15:33:33.452955 | orchestrator | ok: "docs" 2025-08-29 15:33:33.453306 | 2025-08-29 15:33:33.703604 | orchestrator | ok: "artifacts" 2025-08-29 15:33:33.949904 | orchestrator | ok: "logs" 2025-08-29 15:33:33.971438 | 2025-08-29 15:33:33.971660 | LOOP [stage-output : Copy files and folders to staging folder] 2025-08-29 15:33:34.014159 | 2025-08-29 15:33:34.014505 | TASK [stage-output : Make all log files readable] 2025-08-29 15:33:34.309884 | orchestrator | ok 2025-08-29 15:33:34.318776 | 2025-08-29 15:33:34.318941 | TASK [stage-output : Rename log files that match 
extensions_to_txt] 2025-08-29 15:33:34.354178 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:34.367690 | 2025-08-29 15:33:34.367824 | TASK [stage-output : Discover log files for compression] 2025-08-29 15:33:34.383802 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:34.392970 | 2025-08-29 15:33:34.393088 | LOOP [stage-output : Archive everything from logs] 2025-08-29 15:33:34.439763 | 2025-08-29 15:33:34.439947 | PLAY [Post cleanup play] 2025-08-29 15:33:34.448643 | 2025-08-29 15:33:34.448746 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 15:33:34.517066 | orchestrator | ok 2025-08-29 15:33:34.529836 | 2025-08-29 15:33:34.529952 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:33:34.564946 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:34.579705 | 2025-08-29 15:33:34.579835 | TASK [Clean the cloud environment] 2025-08-29 15:33:35.788592 | orchestrator | 2025-08-29 15:33:35 - clean up servers 2025-08-29 15:33:36.520655 | orchestrator | 2025-08-29 15:33:36 - testbed-manager 2025-08-29 15:33:36.605503 | orchestrator | 2025-08-29 15:33:36 - testbed-node-0 2025-08-29 15:33:36.692161 | orchestrator | 2025-08-29 15:33:36 - testbed-node-1 2025-08-29 15:33:36.785484 | orchestrator | 2025-08-29 15:33:36 - testbed-node-3 2025-08-29 15:33:36.872210 | orchestrator | 2025-08-29 15:33:36 - testbed-node-2 2025-08-29 15:33:36.958330 | orchestrator | 2025-08-29 15:33:36 - testbed-node-5 2025-08-29 15:33:37.040463 | orchestrator | 2025-08-29 15:33:37 - testbed-node-4 2025-08-29 15:33:37.129265 | orchestrator | 2025-08-29 15:33:37 - clean up keypairs 2025-08-29 15:33:37.145441 | orchestrator | 2025-08-29 15:33:37 - testbed 2025-08-29 15:33:37.168014 | orchestrator | 2025-08-29 15:33:37 - wait for servers to be gone 2025-08-29 15:33:47.988045 | orchestrator | 2025-08-29 15:33:47 - clean up ports 2025-08-29 15:33:48.172099 | orchestrator | 2025-08-29 15:33:48 - 01393b6d-8ffb-4fc5-b3c1-cfcad7fcead6 
2025-08-29 15:33:48.456707 | orchestrator | 2025-08-29 15:33:48 - 4d8a1a6b-a3dc-4a9c-ba66-793eb6e70a04 2025-08-29 15:33:48.748598 | orchestrator | 2025-08-29 15:33:48 - 81c185fd-d021-4c92-8196-9262fd035146 2025-08-29 15:33:48.963204 | orchestrator | 2025-08-29 15:33:48 - 8a89b5f1-e199-4e86-ac55-23b13de73363 2025-08-29 15:33:49.388649 | orchestrator | 2025-08-29 15:33:49 - b513403d-9e33-4876-ac31-305677bf5499 2025-08-29 15:33:49.616245 | orchestrator | 2025-08-29 15:33:49 - bb4fa0bc-84a6-477a-bb43-9b86fa50f679 2025-08-29 15:33:49.853139 | orchestrator | 2025-08-29 15:33:49 - d5ec4786-2cc5-4f9f-8c7d-262430c6aba0 2025-08-29 15:33:50.097055 | orchestrator | 2025-08-29 15:33:50 - clean up volumes 2025-08-29 15:33:50.212235 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-2-node-base 2025-08-29 15:33:50.249924 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-4-node-base 2025-08-29 15:33:50.291727 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-5-node-base 2025-08-29 15:33:50.333219 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-3-node-base 2025-08-29 15:33:50.378603 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-1-node-base 2025-08-29 15:33:50.419719 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-0-node-base 2025-08-29 15:33:50.460455 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-manager-base 2025-08-29 15:33:50.503020 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-1-node-4 2025-08-29 15:33:50.543445 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-7-node-4 2025-08-29 15:33:50.586575 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-6-node-3 2025-08-29 15:33:50.626064 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-2-node-5 2025-08-29 15:33:50.668447 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-8-node-5 2025-08-29 15:33:50.707946 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-4-node-4 2025-08-29 15:33:50.749321 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-0-node-3 2025-08-29 
15:33:50.794562 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-3-node-3 2025-08-29 15:33:50.838866 | orchestrator | 2025-08-29 15:33:50 - testbed-volume-5-node-5 2025-08-29 15:33:50.882509 | orchestrator | 2025-08-29 15:33:50 - disconnect routers 2025-08-29 15:33:50.956975 | orchestrator | 2025-08-29 15:33:50 - testbed 2025-08-29 15:33:51.969188 | orchestrator | 2025-08-29 15:33:51 - clean up subnets 2025-08-29 15:33:52.016484 | orchestrator | 2025-08-29 15:33:52 - subnet-testbed-management 2025-08-29 15:33:52.182311 | orchestrator | 2025-08-29 15:33:52 - clean up networks 2025-08-29 15:33:52.340377 | orchestrator | 2025-08-29 15:33:52 - net-testbed-management 2025-08-29 15:33:52.635274 | orchestrator | 2025-08-29 15:33:52 - clean up security groups 2025-08-29 15:33:52.674508 | orchestrator | 2025-08-29 15:33:52 - testbed-management 2025-08-29 15:33:52.798580 | orchestrator | 2025-08-29 15:33:52 - testbed-node 2025-08-29 15:33:53.448883 | orchestrator | 2025-08-29 15:33:53 - clean up floating ips 2025-08-29 15:33:53.482351 | orchestrator | 2025-08-29 15:33:53 - 81.163.193.18 2025-08-29 15:33:54.298528 | orchestrator | 2025-08-29 15:33:54 - clean up routers 2025-08-29 15:33:54.371972 | orchestrator | 2025-08-29 15:33:54 - testbed 2025-08-29 15:33:55.638788 | orchestrator | ok: Runtime: 0:00:20.339416 2025-08-29 15:33:55.641297 | 2025-08-29 15:33:55.641397 | PLAY RECAP 2025-08-29 15:33:55.641476 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-08-29 15:33:55.641502 | 2025-08-29 15:33:55.784800 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:33:55.787248 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:33:56.536925 | 2025-08-29 15:33:56.537088 | PLAY [Cleanup play] 2025-08-29 15:33:56.553550 | 2025-08-29 15:33:56.553705 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 15:33:56.622979 | orchestrator | ok 
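The cleanup run above tears resources down in a fixed dependency order: servers first (ports and volumes can't be removed while attached), then keypairs, a wait for servers to be gone, ports, volumes, router disconnects, subnets, networks, security groups, floating IPs, and finally routers. A minimal sketch of that ordering — the function name is hypothetical; only the sequence is taken from the log:

```shell
# Hypothetical helper reproducing the teardown ordering from the cleanup log.
# Each stage must complete before the next: e.g. ports only become deletable
# once their servers are gone, and subnets only once routers are disconnected.
cleanup_order() {
    printf '%s\n' \
        "servers" \
        "keypairs" \
        "wait for servers to be gone" \
        "ports" \
        "volumes" \
        "disconnect routers" \
        "subnets" \
        "networks" \
        "security groups" \
        "floating ips" \
        "routers"
}
```

Note that the second cleanup pass (from `cleanup.yml`) walks the same stages and is effectively a no-op, since the `post.yml` pass already removed everything.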
2025-08-29 15:33:56.633520 | 2025-08-29 15:33:56.633669 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:33:56.669303 | orchestrator | skipping: Conditional result was False 2025-08-29 15:33:56.686816 | 2025-08-29 15:33:56.686998 | TASK [Clean the cloud environment] 2025-08-29 15:33:57.775059 | orchestrator | 2025-08-29 15:33:57 - clean up servers 2025-08-29 15:33:58.238581 | orchestrator | 2025-08-29 15:33:58 - clean up keypairs 2025-08-29 15:33:58.251387 | orchestrator | 2025-08-29 15:33:58 - wait for servers to be gone 2025-08-29 15:33:58.287826 | orchestrator | 2025-08-29 15:33:58 - clean up ports 2025-08-29 15:33:58.357846 | orchestrator | 2025-08-29 15:33:58 - clean up volumes 2025-08-29 15:33:58.421717 | orchestrator | 2025-08-29 15:33:58 - disconnect routers 2025-08-29 15:33:58.460364 | orchestrator | 2025-08-29 15:33:58 - clean up subnets 2025-08-29 15:33:58.485664 | orchestrator | 2025-08-29 15:33:58 - clean up networks 2025-08-29 15:33:58.607021 | orchestrator | 2025-08-29 15:33:58 - clean up security groups 2025-08-29 15:33:58.638526 | orchestrator | 2025-08-29 15:33:58 - clean up floating ips 2025-08-29 15:33:58.658683 | orchestrator | 2025-08-29 15:33:58 - clean up routers 2025-08-29 15:33:59.232484 | orchestrator | ok: Runtime: 0:00:01.250701 2025-08-29 15:33:59.237352 | 2025-08-29 15:33:59.237591 | PLAY RECAP 2025-08-29 15:33:59.237732 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-08-29 15:33:59.237801 | 2025-08-29 15:33:59.374163 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:33:59.375270 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:34:00.144964 | 2025-08-29 15:34:00.145168 | PLAY [Base post-fetch] 2025-08-29 15:34:00.161483 | 2025-08-29 15:34:00.161631 | TASK [fetch-output : Set log path for multiple nodes] 2025-08-29 15:34:00.217077 | orchestrator | skipping: 
Conditional result was False 2025-08-29 15:34:00.225017 | 2025-08-29 15:34:00.225173 | TASK [fetch-output : Set log path for single node] 2025-08-29 15:34:00.268566 | orchestrator | ok 2025-08-29 15:34:00.275720 | 2025-08-29 15:34:00.275840 | LOOP [fetch-output : Ensure local output dirs] 2025-08-29 15:34:00.778666 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/logs" 2025-08-29 15:34:01.055669 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/artifacts" 2025-08-29 15:34:01.333186 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/23526c7215b24797be8b5b736ada2e27/work/docs" 2025-08-29 15:34:01.355140 | 2025-08-29 15:34:01.355329 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-08-29 15:34:02.287102 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:34:02.287429 | orchestrator | changed: All items complete 2025-08-29 15:34:02.287477 | 2025-08-29 15:34:03.003075 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:34:03.675573 | orchestrator | changed: .d..t...... 
./ 2025-08-29 15:34:03.690604 | 2025-08-29 15:34:03.690762 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-08-29 15:34:03.718033 | orchestrator | skipping: Conditional result was False 2025-08-29 15:34:03.721997 | orchestrator | skipping: Conditional result was False 2025-08-29 15:34:03.734109 | 2025-08-29 15:34:03.734197 | PLAY RECAP 2025-08-29 15:34:03.734255 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-08-29 15:34:03.734282 | 2025-08-29 15:34:03.873718 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:34:03.876291 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:34:04.642929 | 2025-08-29 15:34:04.643093 | PLAY [Base post] 2025-08-29 15:34:04.658016 | 2025-08-29 15:34:04.658150 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-08-29 15:34:05.679999 | orchestrator | changed 2025-08-29 15:34:05.690226 | 2025-08-29 15:34:05.690369 | PLAY RECAP 2025-08-29 15:34:05.690469 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-08-29 15:34:05.690542 | 2025-08-29 15:34:05.816775 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:34:05.819371 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-08-29 15:34:06.605199 | 2025-08-29 15:34:06.605386 | PLAY [Base post-logs] 2025-08-29 15:34:06.617075 | 2025-08-29 15:34:06.617223 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-08-29 15:34:07.072886 | localhost | changed 2025-08-29 15:34:07.092990 | 2025-08-29 15:34:07.093188 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-08-29 15:34:07.121134 | localhost | ok 2025-08-29 15:34:07.125990 | 2025-08-29 15:34:07.126116 | TASK [Set zuul-log-path fact] 2025-08-29 
15:34:07.142552 | localhost | ok 2025-08-29 15:34:07.153649 | 2025-08-29 15:34:07.153768 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-08-29 15:34:07.181825 | localhost | ok 2025-08-29 15:34:07.189114 | 2025-08-29 15:34:07.189299 | TASK [upload-logs : Create log directories] 2025-08-29 15:34:07.705996 | localhost | changed 2025-08-29 15:34:07.709010 | 2025-08-29 15:34:07.709128 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 15:34:08.199732 | localhost -> localhost | ok: Runtime: 0:00:00.006537 2025-08-29 15:34:08.207587 | 2025-08-29 15:34:08.207781 | TASK [upload-logs : Upload logs to log server] 2025-08-29 15:34:08.784879 | localhost | Output suppressed because no_log was given 2025-08-29 15:34:08.788321 | 2025-08-29 15:34:08.788537 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 15:34:08.844525 | localhost | skipping: Conditional result was False 2025-08-29 15:34:08.850039 | localhost | skipping: Conditional result was False 2025-08-29 15:34:08.860457 | 2025-08-29 15:34:08.860641 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 15:34:08.911239 | localhost | skipping: Conditional result was False 2025-08-29 15:34:08.912030 | 2025-08-29 15:34:08.915122 | localhost | skipping: Conditional result was False 2025-08-29 15:34:08.923097 | 2025-08-29 15:34:08.923289 | LOOP [upload-logs : Upload console log and json output]
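The `server_ping` step traced earlier in this job (list ACTIVE floating IPs, strip carriage returns, ping each address three times) can be sketched as a shell function. The listing command, `tr -d '\r'` filter, and `-c3` ping count follow the trace verbatim; returning non-zero on the first unreachable address is an assumption about the helper's error handling:

```shell
# Sketch of the server_ping step from the deploy trace above.
# Assumes a clouds.yaml entry named "test"; failing fast on packet loss
# is an assumption, not confirmed by the log.
server_ping() {
    local address
    # -f value -c "Floating IP Address" yields one bare IP per line;
    # tr strips the \r that the CLI may emit at line ends.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address" || return 1
    done
}
```

In the run above this loop covered five floating IPs, all answering with 0% packet loss.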